OBJECTIVE: A comprehensive evaluation of studies using DNA microarray datasets for screening and identifying key genes in gastric cancer is the goal of this systematic review and meta-analysis. To better understand the molecular environment associated with stomach cancer, this study aims to provide a quantitative synthesis of findings. PURPOSE: Using DNA microarray databases in a systematic manner, this study analyzes gastric cancer (GC) screening and gene identification efforts. Through a literature review spanning 2002–2022, this research aims to identify key genes associated with GC and to develop screening and prognosis strategies based on these findings. METHODS: The following databases were searched extensively: Science Direct, CNKI, Web of Science, Springer, and PubMed. Fifteen studies met the inclusion and exclusion criteria; 10,134 tissues served as controls and 11,724 as GC samples. The levels of critical genes, including COL1A1, COL1A2, THBS2, SPP1, SPARC, COL6A3, and COL3A1, were compared between normal and GC tissues. RevMan 5.3 was used to perform the meta-analysis. Weighted mean differences (MDs) and 95% confidence intervals (CIs) were computed under fixed- or random-effects models. RESULTS: According to the meta-analysis, GC tissues exhibited substantially elevated levels of the key genes compared with the control group. In particular, there were statistically significant increases in COL1A1 (MD = 2.43, 95% CI: 1.84–3.02), COL1A2 (MD = 2.75, 95% CI: 1.09–4.41), THBS2 (MD = 2.54, 95% CI: 1.66–3.41), SPP1 (MD = 3.64, 95% CI: 3.40–3.88), SPARC (MD = 1.57, 95% CI: 0.37–2.77), COL6A3 (MD = 2.31, 95% CI: 2.02–2.60), and COL3A1 (MD = 2.21, 95% CI: 1.59–2.82). CONCLUSIONS: This research shows that the COL1A1, THBS2, SPP1, COL6A3, and COL3A1 genes have potential use in gastric cancer screening and prognosis. The results of this study may provide theoretical support for the clinical assessment and prognosis of gastric cancer patients.
"Systematic review and meta-analysis of the screening and identification of key genes in gastric cancer using DNA microarray database" — Wenbiao Duan, Mingjin Yang, Weiliang Sun, Mingmin Xia, Hui Zhu, Chijiang Gu, Haiqiang Zhang. Journal of Intelligent & Fuzzy Systems, published 2024-03-16. DOI: https://doi.org/10.3233/jifs-236416
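The pooled mean differences above are standard inverse-variance summaries. As a minimal sketch (not the authors' RevMan 5.3 workflow), the following Python snippet shows how a fixed-effect pooled MD and its 95% CI can be computed from per-study means, standard deviations, and sample sizes; all study values in the example are hypothetical placeholders.

```python
import numpy as np

def fixed_effect_md(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Inverse-variance fixed-effect pooling of mean differences (MD).

    Each argument is an array with one entry per study; the values passed
    below are illustrative placeholders, not data from the reviewed studies.
    """
    md = np.asarray(mean_t, float) - np.asarray(mean_c, float)      # per-study MD
    var = (np.asarray(sd_t, float) ** 2 / np.asarray(n_t, float)
           + np.asarray(sd_c, float) ** 2 / np.asarray(n_c, float)) # variance of each MD
    w = 1.0 / var                                                   # inverse-variance weights
    pooled = np.sum(w * md) / np.sum(w)                             # pooled MD
    se = np.sqrt(1.0 / np.sum(w))                                   # SE of the pooled MD
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)         # MD and 95% CI

# Hypothetical expression values (log2 scale) for one gene across three studies
md, ci = fixed_effect_md(
    mean_t=[7.8, 8.1, 7.5], sd_t=[1.1, 0.9, 1.3], n_t=[40, 55, 32],
    mean_c=[5.4, 5.6, 5.2], sd_c=[1.0, 1.2, 1.1], n_c=[38, 50, 30])
print(f"pooled MD = {md:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```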
Background: Due to rapid progress in the fields of artificial intelligence, machine learning, and deep learning, power grids are transforming into Smart Grids (SGs), which are versatile, reliable, intelligent, and stable. The power consumption of energy users varies throughout the day as well as across different days of the week. Power consumption forecasting is therefore of vital importance for the sustainable management and operation of SGs. Methodology: In this work, the aim is to apply clustering to divide a smart residential community into several groups of energy users with similar profiles, which is effective for developing and training representative deep neural network (DNN) models for power load forecasting of the users in the respective groups. The DNN model is composed of a convolutional neural network (CNN) followed by LSTM layers, used for feature extraction and sequence learning, respectively. For experimentation, the Smart Grid Smart City (SGSC) project database is used, and its energy users are grouped into various clusters. Results: The residential community is divided into four groups of customers based on the chosen criterion, where Groups 1, 2, 3, and 4 contain 14 percent, 22 percent, 19 percent, and 45 percent of the users, respectively. Almost half of the population (45 percent) of the considered residential community exhibits fewer than 23 outliers in their electricity consumption patterns. The rest of the population is divided into three groups, where specialized deep learning models developed and trained for the respective groups are able to achieve higher forecasting accuracy. The proposed approach will assist researchers and utility companies by requiring fewer specialized deep learning models for accurate forecasting of users who belong to groups with similar energy-consumption profiles.
"DBSCAN-based energy users clustering for performance enhancement of deep learning model" — Khursheed Aurangzeb. Journal of Intelligent & Fuzzy Systems, published 2024-03-05. DOI: https://doi.org/10.3233/jifs-235873
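The grouping criterion above counts outlier days in each user's consumption pattern. The sketch below, under assumed data shapes and illustrative DBSCAN parameters and bin edges (none taken from the paper), shows one way DBSCAN could be used to count a user's outlier days and then bin users into four groups.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

def count_outlier_days(daily_profiles, eps=7.0, min_samples=5):
    """Run DBSCAN on one user's daily load profiles (days x 48 half-hourly
    readings) and count the days labelled as noise (-1)."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(daily_profiles)
    return int(np.sum(labels == -1))

# Hypothetical data: 100 users, one year of daily profiles each
users = [rng.gamma(shape=2.0, scale=0.5, size=(365, 48)) for _ in range(100)]
outlier_counts = np.array([count_outlier_days(u) for u in users])

# Bin users by outlier count; these edges are illustrative, not the paper's criterion.
edges = [23, 60, 120]
groups = np.digitize(outlier_counts, edges)
for g in range(4):
    share = np.mean(groups == g) * 100
    print(f"Group {g + 1}: {share:.0f}% of users")
```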
Accounting professionals are increasingly being encouraged to shift their focus from conventional accounting to accounting information as a result of new management strategies and ideas. Cybercrime and other attempts to exploit weaknesses in online systems have become more common in recent years. By introducing the concept of cloud computing and analyzing its logical structure, this research applies the technology and design model to the development of an Accounting Information Management System (AIMS). In accounting information technology administration, efficient resource allocation and decision-making are crucial for optimizing financial performance and strategic planning. Dynamic planning algorithms are a useful tool for addressing these issues. To maximize efficiency in an accounting group’s allocation of resources, this study employs a dynamic planning method called value iteration. The research presents a new Bayesian optimized Restricted Boltzmann Machine (BO-RBM) for accounting IT management. The data set was first gathered and then pre-processed using z-score normalization. Then, an improved genetic algorithm was used for feature selection. After the system’s design and construction are complete, the BO-RBM is utilized both to specify the cloud platform’s distributed storage mode and to assess the cluster’s performance. The results show that the algorithm can boost financial performance, improve cost management, and accomplish strategic goals in the IT administration of accounting. The research in this study demonstrates that a cloud platform for handling massive amounts of data can accelerate processes and complete tasks quickly.
"Implementation of a dynamic planning algorithm in accounting information technology administration" — Yuan Gao. Journal of Intelligent & Fuzzy Systems, published 2024-03-05. DOI: https://doi.org/10.3233/jifs-234951
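Value iteration, mentioned above as the dynamic planning method, is the standard Bellman backup applied until convergence. The following is a generic sketch on a toy resource-allocation problem; the states, rewards, and discount factor are hypothetical and are not the paper's model.

```python
import numpy as np

# Toy resource-allocation MDP: the state is the remaining budget (in units) and
# the action is how many units to allocate this period. Rewards and the
# discount factor only illustrate the value-iteration update, not the paper's model.
BUDGET = 5
GAMMA = 0.9

def reward(allocated):
    # Diminishing returns on the amount allocated this period.
    return float(np.sqrt(allocated))

def value_iteration(tol=1e-6):
    V = np.zeros(BUDGET + 1)
    while True:
        # Bellman backup: best immediate reward plus discounted value of the
        # remaining budget after allocating `a` units.
        V_new = np.array([
            max(reward(a) + GAMMA * V[s - a] for a in range(s + 1))
            for s in range(BUDGET + 1)
        ])
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

print(np.round(value_iteration(), 3))
```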
Emotional state recognition is an important part of emotion research. Compared to non-physiological signals, electroencephalogram (EEG) signals can truly and objectively reflect a person’s emotional state. To explore multi-frequency band emotional information and address the noise problem of EEG signals, this paper proposes a robust multi-frequency band joint dictionary learning with low-rank representation (RMBDLL). Building on dictionary learning, sparse and low-rank representation techniques are jointly integrated to reveal the intrinsic connections and discriminative information of the EEG multi-frequency bands. RMBDLL consists of robust dictionary learning and intra-class/inter-class local constraint learning. In the robust dictionary learning part, RMBDLL separates complex noise in the EEG signals and establishes clean sub-dictionaries for each frequency band to improve the robustness of the model. In this case, data from different frequency bands obtain the same encoding coefficients, in accordance with the consistency of emotional state recognition. In the intra-class/inter-class local constraint learning part, RMBDLL introduces a regularization term composed of intra-class and inter-class local constraints, constructed from the local structural information of dictionary atoms, resulting in intra-class similarity and inter-class difference across the EEG multi-frequency bands. The effectiveness of RMBDLL is verified on the SEED dataset with different noises. The experimental results show that the RMBDLL algorithm can maintain the discriminative local structure in the training samples and achieve good recognition performance on noisy EEG emotion datasets.
"Robust multi-frequency band joint dictionary learning with low-rank representation" — Huafeng Ding, Junyan Shang, Guohua Zhou. Journal of Intelligent & Fuzzy Systems, published 2024-02-20. DOI: https://doi.org/10.3233/jifs-233753
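RMBDLL builds clean per-band sub-dictionaries; its robust, low-rank, joint-coding formulation is not reproduced here. The sketch below only illustrates the simpler baseline step of learning an independent sparse dictionary per frequency band with scikit-learn, on randomly generated stand-in features.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(1)

# Hypothetical per-band EEG features: samples x channels for three bands.
bands = {"delta": rng.normal(size=(200, 62)),
         "alpha": rng.normal(size=(200, 62)),
         "gamma": rng.normal(size=(200, 62))}

codes = {}
for name, X in bands.items():
    # One sub-dictionary per frequency band; the sparse codes act as features.
    dl = DictionaryLearning(n_components=30, alpha=1.0, max_iter=50,
                            transform_algorithm="lasso_lars", random_state=0)
    codes[name] = dl.fit_transform(X)
    print(name, codes[name].shape)
```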
Distributed flexible flow shop scheduling is becoming increasingly important in the large-scale panel furniture industry; it is vital for higher manufacturing efficiency and economic profit. The distributed scheduling problem with lot-streaming in a flexible flow shop environment is investigated in this work. Furthermore, the practical constraints of packaging collaboration and machine setup times are considered in the proposed approach. The average order waiting time for packaging and the average order delay rate are used as objectives, and a non-dominated sorting method is used to handle this bi-objective optimization problem. An improved encoding method, based on the genetic algorithm, is proposed to handle large-scale orders that need to be divided into sub-lots. The proposed approach is first validated by benchmarking against other multi-objective evolutionary algorithms, and the results show that it achieves good convergence and diversity. In addition, the influence of the proportion of order priority levels and the sub-lot size is investigated in a panel furniture manufacturing scenario. It can be concluded that the enterprise obtains a shorter average order waiting time and a lower delay rate when the sub-lot size is set to two and the order priority levels are allocated in the proportion 1:2:3:4:5.
"Investigation on distributed scheduling with lot-streaming considering setup time based on NSGA-II in a furniture intelligent manufacturing" — Jinxin Wang, Zhanwen Wu, Longzhi Yang, Wei Hu, Chaojun Song, Zhaolong Zhu, Xiaolei Guo, Pingxiang Cao. Journal of Intelligent & Fuzzy Systems, published 2024-02-19. DOI: https://doi.org/10.3233/jifs-237378
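The bi-objective problem above is handled with non-dominated sorting, the core ranking step of NSGA-II. The following plain-Python sketch shows the O(n²) version of that step for two minimization objectives; the schedule objective values are invented for illustration.

```python
def non_dominated_sort(objectives):
    """Split solutions into Pareto fronts for minimization objectives
    (e.g., average waiting time and average delay rate)."""
    n = len(objectives)
    # a dominates b if a is no worse on every objective and differs from b.
    dominates = lambda a, b: all(x <= y for x, y in zip(a, b)) and a != b
    remaining = set(range(n))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objectives[j], objectives[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts

# Hypothetical (waiting time, delay rate) pairs for six candidate schedules
objs = [(12.0, 0.10), (10.5, 0.14), (15.0, 0.08),
        (11.0, 0.20), (9.8, 0.16), (13.0, 0.09)]
print(non_dominated_sort(objs))
```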
Due to intensified off-balance-sheet disclosure requirements from regulatory authorities, financial reports now contain a substantial amount of information beyond the financial statements. Consequently, the length of the footnotes in financial reports exceeds that of the financial statements themselves. This poses a novel challenge for regulators and users of financial reports in efficiently managing this information. Financial reports, with their clear structure, contain abundant structured information applicable to information extraction, automatic summarization, and information retrieval. Extracting headings and paragraph content from financial reports enables the acquisition of the annual report text’s framework. This paper focuses on extracting the structural framework of annual report texts and introduces an OpenCV-based method for text framework extraction using computer vision. The proposed method employs morphological image dilation to distinguish headings from the main body of the text. Moreover, this paper combines the proposed method with a traditional, rule-based extraction method that exploits the characteristic features of numbers and symbols at the beginning of headings. This combination results in an optimized framework extraction method, producing a more concise text framework.
"A text extraction framework of financial report in traditional format with OpenCV" — Jiaxin Wei, Jin Yang, Xinyang Liu. Journal of Intelligent & Fuzzy Systems, published 2024-02-17. DOI: https://doi.org/10.3233/jifs-234170
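The heading/body separation above relies on horizontal morphological dilation, so that characters on one line merge into a single blob whose width can distinguish headings from full-width body lines. The snippet below is a hedged OpenCV sketch of that idea; the image path, kernel size, and 60%-width heuristic are placeholders rather than the paper's actual parameters.

```python
import cv2

# Load a rendered report page (the path is a placeholder) and binarize it.
page = cv2.imread("report_page.png", cv2.IMREAD_GRAYSCALE)
assert page is not None, "replace the placeholder path with a real page image"
_, binary = cv2.threshold(page, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Dilate with a wide, short kernel so characters on one text line merge into a
# blob; short isolated blobs tend to be headings, full-width blobs body text.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (25, 3))
dilated = cv2.dilate(binary, kernel, iterations=1)

contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
page_width = page.shape[1]
heading_boxes = []
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if w < 0.6 * page_width:          # illustrative width threshold
        heading_boxes.append((x, y, w, h))

print(f"{len(heading_boxes)} candidate heading regions")
```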
Multi-attribute decision-making (MADM) methods can deeply mine hidden information in data and make more reliable decisions consistent with actual needs and human cognition. For this reason, this paper proposes the bipolar N-soft PROMETHEE (preference ranking organization method for enrichment of evaluation) method. The method fully embodies the advantages of the PROMETHEE method, which can limit unconditional compensation between attribute values and effectively reflect the priority among attribute values. Further, by introducing an attribute threshold to filter research objects, the proposed method not only dramatically reduces the amount of computation but also considers the impact of the size of the attribute value itself on decision-making. Secondly, the paper proposes the concepts of attribute praise, attribute popularity, total praise, and total popularity for the first time, fully mining information from bipolar N-soft sets, which can effectively handle situations where attribute values have different orders of magnitude. In addition, this paper presents the decision-making process of the new method, closely integrating the theoretical model with real-life applications. Finally, this paper analyses and compares the proposed method with existing ones, further verifying its effectiveness and flexibility.
"Multi-attribute decision-making analysis based on the bipolar N-soft PROMETHEE method" — Xiao-Guang Zhou, Ya-Nan Chen, Jia-Xi Ji. Journal of Intelligent & Fuzzy Systems, published 2024-02-17. DOI: https://doi.org/10.3233/jifs-236404
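The outranking machinery that the bipolar N-soft extension builds on is classical PROMETHEE II. As background only (it omits the bipolar N-soft constructs, thresholds, and praise/popularity measures introduced in the paper), the sketch below computes net outranking flows with the "usual" preference function on a small hypothetical decision matrix.

```python
import numpy as np

def promethee_ii(X, weights, minimize=None):
    """Basic PROMETHEE II net outranking flows with the 'usual' preference
    function; a generic version, not the paper's bipolar N-soft extension."""
    X = np.asarray(X, dtype=float)
    n, _ = X.shape
    if minimize:
        X[:, minimize] *= -1              # turn cost criteria into benefit criteria
    phi_plus = np.zeros(n)
    phi_minus = np.zeros(n)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            # Usual preference function: 1 if a beats b on a criterion, else 0.
            pref_ab = ((X[a] > X[b]).astype(float) * weights).sum()
            phi_plus[a] += pref_ab / (n - 1)
            phi_minus[b] += pref_ab / (n - 1)
    return phi_plus - phi_minus           # net flow; higher is better

# Hypothetical alternatives-by-attributes matrix and weights
scores = [[7, 3, 9], [6, 5, 8], [8, 2, 7], [5, 6, 6]]
weights = np.array([0.5, 0.2, 0.3])
print(promethee_ii(scores, weights, minimize=[1]))   # attribute 1 treated as a cost
```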
How to expand the variable domain and monotonicity of aggregation functions to generate new aggregation functions is an important research topic in aggregation functions. In this work, the concept of interval-valued pre-(quasi-)grouping functions is given by relaxing the interval monotonicity of interval-valued (quasi-)grouping functions to interval directional monotonicity. Then, some basic properties of interval-valued pre-(quasi-)grouping functions and the relationship between interval-valued pre-(quasi-)grouping functions and pre-(quasi-)grouping functions are presented. Accordingly, several construction methods for interval-valued pre-(quasi-)grouping functions are proposed. Finally, the concepts of (IG, IN)-interval-valued directional monotonic fuzzy implications and QL-interval-valued directional monotonic operations are introduced on the basis of interval-valued pre-(quasi-)grouping functions IG, interval-valued overlap functions IO, and interval-valued fuzzy negations IN. In addition, the basic properties of (IG, IN)-interval-valued directional monotonic fuzzy implications and QL-interval-valued directional monotonic operations are studied.
"Interval-valued pre-(quasi-)grouping functions and its application in constructing interval-valued directional monotonic fuzzy implications" — Peng Yu, Huxiong Song, Hui Liu. Journal of Intelligent & Fuzzy Systems, published 2024-02-16. DOI: https://doi.org/10.3233/jifs-233318
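For readers new to the area, the block below recalls the standard point-valued definitions of a grouping function and of directional monotonicity on which the interval-valued constructions are based; these are the usual textbook notions, not the paper's new interval-valued definitions.

```latex
% Standard definitions recalled as background (not the paper's new notions).
A function $G:[0,1]^2 \to [0,1]$ is a \emph{grouping function} if it is
commutative, increasing, continuous, and satisfies
$G(x,y)=0 \iff x=y=0$ and $G(x,y)=1 \iff x=1 \text{ or } y=1$;
for example, $G(x,y) = 1-(1-x)(1-y)$.

Given a direction $\vec{r}=(r_1,r_2)\neq(0,0)$, a function
$F:[0,1]^2\to[0,1]$ is \emph{$\vec{r}$-increasing} (directionally monotone) if
$F(x_1+cr_1,\, x_2+cr_2) \ge F(x_1,x_2)$ for all $c>0$ such that
$(x_1+cr_1,\, x_2+cr_2)\in[0,1]^2$.
```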
Rainfall forecasting is essential because heavy and irregular rainfall has many impacts, such as the destruction of crops and farms. The occurrence of rainfall is highly related to atmospheric parameters, so a better forecasting model is essential for early warning that can minimize risks and manage agricultural farms more effectively. In this manuscript, a Deep Neural Network (DNN) optimized with the Flamingo Search Optimization Algorithm (FSOA) is proposed for long-term and short-term rainfall forecasting. The rainfall data are obtained from the standard Sudheerachary India Rainfall Analysis (IRA) dataset. The Morphological Filtering and Extended Empirical Wavelet Transformation (MFEEWT) approach is utilized for pre-processing, and the deep neural network is utilized for rainfall prediction and classification. Additionally, the parameters of the DNN model are optimized by the Flamingo Search Optimization Algorithm. The proposed MFEEWT-DNN-FSOA approach effectively predicts rainfall at different locations across India. The proposed model is implemented in Python, and the performance metrics are calculated. For forecasting rainfall in Cannur, Kerala, the proposed MFEEWT-DNN-FSOA approach achieves 25%, 26%, and 25.5% higher accuracy and 35.8%, 24.7%, and 15.9% lower error rates than the existing Map-Reduce based Exponential Smoothing Technology for rainfall prediction (MR-EST-RP), modular artificial neural networks with support vector regression for rainfall prediction (MANN-SVR-RP), and biogeography-based extreme learning machine for rainfall prediction (BBO-ELM-RP) methods, respectively.
"Long-term and short-term rainfall forecasting using deep neural network optimized with flamingo search optimization algorithm" — S. Vidya, Veeraraghavan Jagannathan, T. Guhan, Jogendra Kumar. Journal of Intelligent & Fuzzy Systems, published 2023-11-10. DOI: https://doi.org/10.3233/jifs-235798
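In the paper, the FSOA tunes the DNN's parameters; that metaheuristic is not reproduced here. The sketch below only shows a plain Keras DNN regressor for rainfall, trained on synthetic stand-in features, with hyperparameters fixed to plausible values where FSOA-tuned ones would be used.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(42)

# Hypothetical atmospheric features (e.g., temperature, humidity, pressure,
# wind) and rainfall targets; the real work uses the IRA dataset and
# FSOA-tuned hyperparameters, which are fixed to plausible values here.
X = rng.normal(size=(1000, 6)).astype("float32")
y = (X @ rng.normal(size=(6, 1)) + rng.normal(scale=0.1, size=(1000, 1))).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(6,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(1),                      # predicted rainfall amount
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3), loss="mse")
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
print("MSE on the synthetic data:", model.evaluate(X, y, verbose=0))
```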
This research proposes a spin-orbit torque magnetic random access memory (SOT-MRAM)-based Binary CNN In-Memory Accelerator (BIMA) to minimize power utilization, and suggests In-Memory Computing (IMC) for an AdderNet-based BIMA to further enhance performance by fully utilizing the benefits of IMC as well as a low-current-consumption configuration employing SOT-MRAM. At the algorithm level, an IMC-friendly computation pipeline for AdderNet convolution is recommended. Additionally, the suggested sense amplifier is capable not only of the addition operation but also of typical Boolean operations, including subtraction. According to simulation results on the Modified National Institute of Standards and Technology (MNIST) dataset, the architecture suggested in this research consumes less power than its spin-transfer torque (STT) MRAM and resistive random access memory (ReRAM)-based counterparts. Based on the evaluation outcomes, the presented strategy outperforms the in-memory accelerator in terms of speedup and energy efficiency by 17.13× and 18.20×, respectively.
"Spin orbit magnetic random access memory based binary CNN in-memory accelerator (BIMA) with sense amplifier" — K. Kalaichelvi, M. Sundaram, P. Sanmugavalli. Journal of Intelligent & Fuzzy Systems, published 2023-11-10. DOI: https://doi.org/10.3233/jifs-223898
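AdderNet convolution, referenced above, replaces the multiply-accumulate of ordinary convolution with the negative ℓ1 distance between the filter and each input patch, which is why it maps well onto addition-centric in-memory hardware. The following numpy sketch shows that computation for a single channel; the input and filter are tiny hypothetical examples.

```python
import numpy as np

def adder_conv2d(x, filt):
    """AdderNet-style 2-D 'convolution': each output is the negative L1
    distance between the filter and the corresponding input patch, so the
    inner loop needs only additions/subtractions. Single channel, stride 1,
    no padding."""
    H, W = x.shape
    k, _ = filt.shape
    out = np.empty((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + k, j:j + k]
            out[i, j] = -np.abs(patch - filt).sum()   # no multiplications
    return out

# Tiny hypothetical example with a binarised input and filter
x = np.array([[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0], [0, 0, 1, 1]], dtype=float)
f = np.array([[1, 0], [0, 1]], dtype=float)
print(adder_conv2d(x, f))
```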