Electrical device automation in smart industries integrates machines, electronic circuits, and control systems for efficient operation. Automated controls reduce human intervention and manual operations through proportional-integral-derivative (PID) controllers. Considering these devices' operational and control-loop contributions, this article introduces an Override-Controlled Definitive Performance Scheme (OCDPS). The scheme confines machine operations within their allocated time intervals to prevent loop failures. The control value for multiple electrical machines is estimated from the operational load and time so that failures are avoided. Override cases are handled with predictive learning over previous operational logs. Based on the override prediction, the control value is adjusted independently for each device to confine variation loops. The automation features are programmed for both pre- and post-failure conditions to stop further operational overrides. Predictive learning independently identifies possible overrides and machine failures, increasing efficacy. The proposed method is compared with previously established models, including ILC, ASLP, and TD3, on uptime, errors, override time, productivity, and prediction accuracy; operational loops and typical running times are among the variables considered. The learning results are used to estimate efficiency by adjusting operating time and loop consistency through the control values, and detected loop failures modify the control parameters of individual machine processes to avoid unscheduled downtime.
{"title":"Intelligent control technology of engineering electrical automation for PID algorithm","authors":"Meng Niu","doi":"10.3233/idt-230125","DOIUrl":"https://doi.org/10.3233/idt-230125","url":null,"abstract":"Electrical device automation in smart industries assimilates machines, electronic circuits, and control systems for efficient operations. The automated controls provide human intervention and fewer operations through proportional-integral-derivative (PID) controllers. Considering these devices’ operational and control loop contributions, this article introduces an Override-Controlled Definitive Performance Scheme (OCDPS). This scheme focuses on confining machine operations within the allocated time intervals preventing loop failures. The control value for multiple electrical machines is estimated based on the operational load and time for preventing failures. The override cases use predictive learning that incorporates the previous operational logs. Considering the override prediction, the control value is adjusted independently for different devices for confining variation loops. The automation features are programmed as before and after loop failures to cease further operational overrides in this process. Predictive learning independently identifies the possibilities in override and machine failures for increasing efficacy. The proposed method is contrasted with previously established models including the ILC, ASLP, and TD3. This evaluation considers the parameters of uptime, errors, override time, productivity, and prediction accuracy. Loops in operations and typical running times are two examples of the variables. The learning process results are utilized to estimate efficiency by modifying the operating time and loop consistencies with the help of control values. To avoid unscheduled downtime, the discovered loop failures modify the control parameters of individual machine processes.","PeriodicalId":43932,"journal":{"name":"Intelligent Decision Technologies-Netherlands","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84031041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software defect prediction models are used to predict high-risk software components. Feature selection has a significant impact on the prediction performance of these models, since redundant and unimportant features make the prediction model more difficult to learn. Ensemble feature selection has recently emerged as a methodology for enhancing feature selection performance. This paper proposes a new multi-criteria decision-making (MCDM) based ensemble feature selection (EFS) method, termed MCDM-EFS. The proposed method first generates a decision matrix signifying each feature's importance score with respect to various existing feature selection methods. Next, the decision matrix is used as input to the well-known MCDM method TOPSIS, which assigns a final rank to each feature. The approach is validated in an experimental study on software defect prediction using two classifiers, K-nearest neighbor (KNN) and naïve Bayes (NB), over five open-source datasets, and its predictive performance is compared with existing feature selection algorithms using two evaluation metrics, nMCC and G-measure. The experimental results show that MCDM-EFS significantly improves the predictive performance of software defect prediction models over other feature selection methods in terms of both nMCC and G-measure.
{"title":"MCDM-EFS: A novel ensemble feature selection method for software defect prediction using multi-criteria decision making","authors":"Kamaldeep Kaur, Ajay Mahaputra Kumar","doi":"10.3233/idt-230251","DOIUrl":"https://doi.org/10.3233/idt-230251","url":null,"abstract":"Software defect prediction models are used for predicting high risk software components. Feature selection has significant impact on the prediction performance of the software defect prediction models since redundant and unimportant features make the prediction model more difficult to learn. Ensemble feature selection has recently emerged as a new methodology for enhancing feature selection performance. This paper proposes a new multi-criteria-decision-making (MCDM) based ensemble feature selection (EFS) method. This new method is termed as MCDM-EFS. The proposed method, MCDM-EFS, first generates the decision matrix signifying the feature’s importance score with respect to various existing feature selection methods. Next, the decision matrix is used as the input to well-known MCDM method TOPSIS for assigning a final rank to each feature. The proposed approach is validated by an experimental study for predicting software defects using two classifiers K-nearest neighbor (KNN) and naïve bayes (NB) over five open-source datasets. The predictive performance of the proposed approach is compared with existing feature selection algorithms. Two evaluation metrics – nMCC and G-measure are used to compare predictive performance. The experimental results show that the MCDM-EFS significantly improves the predictive performance of software defect prediction models against other feature selection methods in terms of nMCC as well as G-measure.","PeriodicalId":43932,"journal":{"name":"Intelligent Decision Technologies-Netherlands","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88238653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To address the low accuracy of evaluation results caused by throughput and transmission-delay effects on traditional systems in 6G networks, this paper proposes a design method for a network security processing system in 5G/6gNG-DSS on an intelligent model computer. Based on the principle of active defense, a server-side structure is designed that uses the ScanHome SH-800/400 embedded barcode/QR-code scanning module as the scanning engine, with an evaluation device built on the PA-RISC RISC microprocessor; once the system fails, it sends an early-warning signal. Control, data, and cooperation interfaces support information exchange between subsystems, and pin 4 of the TL494 pulse-width modulator is used in the power-supply design. The system software flow is designed with a top-down data management method, a mathematical model is built, and network entropy is introduced to weigh the benefits and realize the security evaluation. The experimental results show that the evaluation accuracy of the system reaches up to 98%, which helps ensure user information security. In conclusion, the active-defense network security problem is transformed into a dynamic analysis problem, providing an effective decision-making scheme for managers; the system evaluation based on Packet Tracer software achieves high accuracy and supports network security analysis.
{"title":"Design of network security processing system in 5G/6gNG-DSS of intelligent model computer","authors":"Bo Wei, Huanying Chen, Zhaoji Huang","doi":"10.3233/idt-230143","DOIUrl":"https://doi.org/10.3233/idt-230143","url":null,"abstract":"In order to solve the problem of low accuracy of evaluation results caused by the impact of throughput and transmission delay on traditional systems in 6G networks, this paper proposes a design method of network security processing system in 5G/6gNG-DSS of intelligent model computer. Supported by the principle of active defense, this paper designs a server-side structure, using ScanHome SH-800/400 embedded scanning module barcode QR code scanning device as the scanning engine. We put an evaluation device on the RISC chip PA-RISC microprocessor. Once the system fails, it will send an early warning signal. Through setting control, data, and cooperation interfaces, it can support the information exchange between subsystems. The higher pulse width modulator TL494:4 pin is used to design the power source. We use the top-down data management method to design the system software flow, build a mathematical model, introduce network entropy to weigh the benefits, and realize the system security evaluation. The experimental results show that the highest evaluation accuracy of the system can reach 98%, which can ensure user information security. Conclusion: The problem of active defense network security is transformed into a dynamic analysis problem, which provides an effective decision-making scheme for managers. The system evaluation based on Packet Tracer software has high accuracy and provides important decisions for network security analysis.","PeriodicalId":43932,"journal":{"name":"Intelligent Decision Technologies-Netherlands","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88279657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Due to the lack of data security protection, a large amount of malicious information is leaked, which has drawn increasing attention to building information security (InfoSec). Construction information involves a large number of participants, and the number of construction project files is huge, leading to an enormous volume of information. However, traditional network security information protection software is mostly passive and difficult to make more autonomous. Therefore, this paper introduces a data sharing algorithm into building InfoSec management. It proposes an Attribute-Based Encryption (ABE) algorithm based on data sharing, which is computationally simple and strong at encrypting attributes. The algorithm is added to the building InfoSec management system (ISMS) designed in this paper, which not only reduces the burden on relevant personnel but also offers flexible control and high security. The experimental results show that when 10 users logged in, the stability and security of the designed system were 87% and 91% respectively; with 20 users, they were 89% and 92%; and with 80 users, 94% and 95%. The stability and security of the system thus reach a high level, which can ensure the secure and effective management of building information.
{"title":"Construction information security management system based on data sharing algorithm","authors":"Lihui Zhao","doi":"10.3233/idt-230144","DOIUrl":"https://doi.org/10.3233/idt-230144","url":null,"abstract":"Due to the lack of data security protection, a large number of malicious information leaks, which makes building information security (InfoSec) issues more and more attention. The construction information involves a large number of participants, and the number of construction project files is huge, leading to a huge amount of information. However, traditional network security information protection software is mostly passive, which is difficult to enhance its autonomy. Therefore, this text introduced data sharing algorithm in building InfoSec management. This text proposed an Attribute Based Encryption (ABE) algorithm based on data sharing, which is simple in calculation and strong in encrypting attributes. This algorithm was added to the building InfoSec management system (ISMS) designed in this text, which not only reduces the burden of relevant personnel, but also has flexible control and high security. The experimental results showed that when 10 users logged in to the system, the stability and security of the system designed in this text were 87% and 91% respectively. When 20 users logged in to the system, the system stability and security designed in this text were 89% and 92% respectively. When 80 users logged in to the system, the system stability and security designed in this text were 94% and 95% respectively. It can be found that the stability and security of the system have reached a high level, which can ensure the security of effective management of building information.","PeriodicalId":43932,"journal":{"name":"Intelligent Decision Technologies-Netherlands","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81501916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Atrial Fibrillation (AF) is the most common asymptomatic arrhythmia that contributes significantly to death and morbidity, and the ability to extract valuable features is necessary for AF identification. Still, many existing studies have relied on weak features such as Time-Frequency Energy (TFE) and shallow time features; they require lengthy ECG data to capture the relevant information and cannot capture the slight variation caused by previous AF episodes. Approaches targeting interfering noise signals focus primarily on separating AF from signals with a Sinus Rhythm (SR). Thus, this study explores AF detection with heuristic-assisted deep learning approaches. Initially, the ECG signals are gathered from standard resources. These signals are pre-processed for denoising and artifact removal to enhance data quality for further processing. Deep feature extraction is then done in two phases: in the first phase, the RR interval is extracted from the pre-processed ECG signals and deep features are derived from it using a Convolutional Neural Network (CNN); in the second phase, deep features are extracted from the pre-processed ECG signals themselves using the same CNN. These deep features are fused and fed to a newly suggested heuristic algorithm called the Enhanced Average and Subtraction-Based Optimizer (E-ASBO), which selects the optimal fused features to reduce redundancy in the signals. Finally, the chosen optimal fused features are fed to a new Adaptive Ensemble Neural Network (AENN) with heuristic adoption, built from an Elman Neural Network, a Deep Neural Network (DNN), and a Recurrent Neural Network (RNN). The model focuses on increasing the accuracy of AF detection. The proposed networks have significant potential for future AF screening or clinical computer-aided AF diagnosis in wearable devices, and show superior performance and a more intuitive design compared to existing works.
{"title":"Optimal fused feature selection with ensemble learning foratrial fibrillation detection using ECG with enhanced average and subtraction-based optimizer","authors":"Sanjib Kumar Dhara, Nilankar Bhanja, Prabodh Khampariya","doi":"10.3233/idt-220130","DOIUrl":"https://doi.org/10.3233/idt-220130","url":null,"abstract":"Most common asymptomatic arrhythmia that significantly leads to death and morbidity is Atrial Fibrillation (AF). It has the ability to extract valuable features is necessary for AF identification. Still, many existing studies have relied on weak frequencies that, are Time-Frequency Energy (TFE) and shallow time features. It requires lengthy ECG data to confine the information and is unable to confine the slight variation affected by the previous AF. The interfering noise signals focus primarily on separating AF from signals with a Sinus Rhythm (SR). Thus, this study would explore the detection of AF with heuristic-assisted deep learning approaches. Initially, the ECG Signals are gathered from the standard resources. Next, these gathered signals are pre-processed to perform denoising and artifact removal for enhancing the quality of data for further processes. Then, the deep feature extraction is done in two phases. In the first phase, the RR interval is extracted from the pre-processing ECG signals and the deep features are removed utilizing a Convolutional Neural Network (CNN). In contrast, deep features are employed to extract the features from the pre-processed ECG signals using the same CNN in the second phase. Then, these gathered in-depth features are fused and fed to the newly suggested heuristic algorithm called Enhanced Average and Subtraction-Based Optimizer (E-ASBO) for selecting the optimal fused features for reducing the redundancy in the signals. Finally, the chosen optimal fused features are fed to the new Adaptive Ensemble Neural Network (AENN) with heuristic adoption with the techniques such as Elma Neural Network, Deep Neural Network (DNN), and Recurrent Neural Network (RNN). This model focuses on increasing the accuracy of detecting AF. These proposed networks have more significant potential in future AF screening or clinical computer-aided AF diagnosis in wearable devices. It has superior performance and intuitive nature compared to the existing works.","PeriodicalId":43932,"journal":{"name":"Intelligent Decision Technologies-Netherlands","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89936286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Emotion recognition is one of the most important components of human-computer interaction, and it can be performed using voice signals. With conventional approaches, it is not possible to optimise the feature extraction process and the classification process at the same time. Research is increasingly focusing on many different types of deep learning in an effort to overcome these difficulties, and applying deep learning algorithms to categorization problems is becoming increasingly important. However, the advantages available in one model are not available in another, which limits the practical feasibility of such approaches. The main objective of this work is to explore hybrid deep learning models for speech-signal-based emotion identification. Two methods are explored: CNN and CNN-LSTM. The first is a conventional model and the second is a hybrid model. The TESS database is used for the experiments, and the results are analysed in terms of various accuracy measures. An average accuracy of 97% for CNN and 98% for CNN-LSTM is achieved with these models.
{"title":"Hybrid deep learning models based emotion recognition with speech signals","authors":"M. K. Chowdary, E. A. Priya, D. Dănciulescu, J. Anitha, D. Hemanth","doi":"10.3233/idt-230216","DOIUrl":"https://doi.org/10.3233/idt-230216","url":null,"abstract":"Emotion recognition is one of the most important components of human-computer interaction, and it is something that can be performed with the use of voice signals. It is not possible to optimise the process of feature extraction as well as the classification process at the same time while utilising conventional approaches. Research is increasingly focusing on many different types of “deep learning” in an effort to discover a solution to these difficulties. In today’s modern world, the practise of applying deep learning algorithms to categorization problems is becoming increasingly important. However, the advantages available in one model is not available in another model. This limits the practical feasibility of such approaches. The main objective of this work is to explore the possibility of hybrid deep learning models for speech signal-based emotion identification. Two methods are explored in this work: CNN and CNN-LSTM. The first model is the conventional one and the second is the hybrid model. TESS database is used for the experiments and the results are analysed in terms of various accuracy measures. An average accuracy of 97% for CNN and 98% for CNN-LSTM is achieved with these models.","PeriodicalId":43932,"journal":{"name":"Intelligent Decision Technologies-Netherlands","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86987677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep learning (DL) underpins many applications of artificial intelligence (AI), and cloud services are the main way modern computing capability is delivered; DL functions provided by cloud services have therefore attracted great attention. AI is gradually playing an important role in many areas of life, and the demand and enthusiasm of governments at all levels for building AI computing capacity are also growing. AI evaluation processes are often based on complex algorithms that use or generate large amounts of data, which places high requirements on the data processing and storage capacity of the device itself. Because current data processing and information storage technology lags behind these requirements, this has become an obstacle to the further development of AI cloud services. Therefore, this paper studies the requirements and objectives of a cloud service system under AI by analyzing the operating characteristics, service mode, and current state of DL, constructs design principles from those requirements, and finally designs and implements a cloud service system, thereby improving its algorithm scheduling quality. The data processing, resource allocation, and security management capacities of the AI cloud service system were superior to those of the original cloud service system, by 7.3%, 6.7%, and 8.9% respectively. In conclusion, DL plays an important role in the construction of AI cloud service systems.
{"title":"System construction of deep learning AI cloud service mode","authors":"Chunhua Lin","doi":"10.3233/idt-230150","DOIUrl":"https://doi.org/10.3233/idt-230150","url":null,"abstract":"Deep learning (DL) is the basis of many applications of artificial intelligence (AI), and cloud service is the main way of modern computer capabilities. DL functions provided by cloud services have attracted great attention. At present, the application of AI in various fields of life is gradually playing an important role, and the demand and enthusiasm of governments at all levels for building AI computing capacity are also growing. The AI logic evaluation process is often based on complex algorithms that use or generate large amounts of data. Due to the higher requirements for the data processing and storage capacity of the device itself, which are often not fully realized by humans because the current data processing technology and information storage technology are relatively backward, this has become an obstacle to the further development of AI cloud services. Therefore, this paper has studied the requirements and objectives of the cloud service system under AI by analyzing the operation characteristics, service mode and current situation of DL, constructed design principles according to its requirements, and finally designed and implemented a cloud service system, thereby improving the algorithm scheduling quality of the cloud service system. The data processing capacity, resource allocation capacity and security management capacity of the AI cloud service system were superior to the original cloud service system. Among them, the data processing capacity of AI cloud service system was 7.3% higher than the original cloud service system; the resource allocation capacity of AI cloud service system was 6.7% higher than the original cloud service system; the security management capacity of AI cloud service system was 8.9% higher than the original cloud service system. In conclusion, DL plays an important role in the construction of AI cloud service system.","PeriodicalId":43932,"journal":{"name":"Intelligent Decision Technologies-Netherlands","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75269063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the rapid development of the economy, the demand for electric power is increasing, and the operating quality of the power system directly affects production and daily life. The electric energy provided by the power system is the foundation of social operation; continuously optimizing the power system's functions improves the efficiency of social operation and creates economic benefits, thereby promoting social progress and quality of life. Within the power system, the power distribution network (PDN) is responsible for transmitting electricity to all parts of the country, and its transmission efficiency directly affects the operational efficiency of the whole system. PDN scheduling plays an important role in improving power supply reliability, optimizing resource allocation, reducing energy waste, and reducing environmental pollution, and is therefore significant for both socio-economic development and environmental protection. However, inflexibility in power system scheduling leads to loss and waste of electric energy in PDN scheduling, so it is necessary to automate PDN operation and use automation technology to improve the operational efficiency and energy utilization of the power system. This article optimizes the energy-saving management of PDN dispatching through electrical automation technology, proposing a distribution scheduling algorithm based on this technology. The algorithm enables real-time monitoring, analysis, and scheduling of PDNs, improving the efficiency and reliability of distribution systems and reducing energy consumption. The experimental results show that before using the distribution scheduling algorithm, the high-loss distribution-to-transformation ratios of power distribution stations in the first to fourth quarters were 21.93%, 22.95%, 23.61%, and 22.47%, respectively; after using the algorithm, they were 15.75%, 13.81%, 14.77%, and 13.12%. This shows that the algorithm can reduce the high-loss distribution-to-transformation ratio of power distribution stations and lower their distribution losses, saving electric energy. The results indicate that electrical automation technology can play an excellent role in PDN scheduling, optimizing its energy-saving management and pointing toward an advanced direction for intelligent management of PDN dispatching.
{"title":"Energy saving management technology for electrical automation and power distribution network dispatching","authors":"Zhenyuan Zhang","doi":"10.3233/idt-230121","DOIUrl":"https://doi.org/10.3233/idt-230121","url":null,"abstract":"With the rapid development of the economy, the demand for electric power is increasing, and the operation quality of the power system directly affects the quality of people’s production and life. The electric energy provided by the electric power system is the foundation of social operation. Through continuous optimization of the functions of the electric power system, the efficiency of social operation can be improved, and economic benefits can be continuously created, thereby promoting social progress and people’s quality of life. In the power system, the responsibility of the power distribution network (PDN) is to transmit electricity to all parts of the country, and its transmission efficiency would directly affect the operational efficiency of the power system. PDN scheduling plays an important role in improving power supply reliability, optimizing resource allocation, reducing energy waste, and reducing environmental pollution. It is of great significance for promoting social and economic development and environmental protection. However, in the PDN scheduling, due to the inflexibility of the power system scheduling, it leads to the loss and waste of electric energy. Therefore, it is necessary to upgrade the operation of the PDN automatically and use automation technology to improve the operational efficiency and energy utilization rate of the power system. This article optimized the energy-saving management of PDN dispatching through electrical automation technology. The algorithm proposed in this paper was a distribution scheduling algorithm based on electrical automation technology. Through this algorithm, real-time monitoring, analysis, and scheduling of PDNs can be achieved, thereby improving the efficiency and reliability of distribution systems and reducing energy consumption. The experimental results showed that before using the distribution scheduling algorithm based on electrical automation technology, the high loss distribution to transformation ratios of power distribution stations in the first to fourth quarters were 21.93%, 22.95%, 23.61%, and 22.47%, respectively. After using the distribution scheduling algorithm, the high loss distribution to transformation ratios for the four quarters were 15.75%, 13.81%, 14.77%, and 13.12%, respectively. This showed that the algorithm can reduce the high loss distribution to transformation ratio of power distribution stations and reduce their distribution losses, which saved electric energy. 
The research results of this article indicated that electrical automation technology can play an excellent role in the field of PDN scheduling, which optimized the energy-saving management technology of PDN scheduling, indicating an advanced development direction for intelligent management of PDN scheduling.","PeriodicalId":43932,"journal":{"name":"Intelligent Decision Technologies-Netherlands","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84778949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
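Reading the reported metric as the share of distribution (transformer) districts classified as high-loss in a quarter, the arithmetic behind such a figure might look like the sketch below; the 10% threshold and the energy values are hypothetical, since the abstract does not give the classification rule.

```python
# Rough illustration only: the exact definition of the "high loss distribution
# to transformation ratio" is not given in the abstract.

HIGH_LOSS_THRESHOLD = 0.10   # a district counts as "high-loss" above a 10% line-loss rate

def line_loss_rate(energy_in_kwh, energy_delivered_kwh):
    """Fraction of supplied energy lost between input and delivery."""
    return (energy_in_kwh - energy_delivered_kwh) / energy_in_kwh

def high_loss_ratio(districts):
    """districts: list of (energy_in_kwh, energy_delivered_kwh) per district."""
    flagged = sum(1 for e_in, e_out in districts
                  if line_loss_rate(e_in, e_out) > HIGH_LOSS_THRESHOLD)
    return flagged / len(districts)

# Hypothetical quarterly figures for four districts.
quarter = [(1200.0, 1130.0), (980.0, 850.0), (1500.0, 1320.0), (760.0, 705.0)]
print(f"high-loss district ratio: {high_loss_ratio(quarter):.2%}")
```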
The greatest challenge for healthcare in drug repositioning and discovery is identifying interactions between known drugs and targets. Experimental methods can reveal some drug-target interactions (DTIs), but identifying all of them is an expensive and time-consuming endeavor. Machine learning-based algorithms currently treat DTI prediction as a binary classification problem. However, prediction performance is negatively impacted by the lack of experimentally validated negative samples and the resulting imbalanced class distribution; recasting DTI prediction as a regression problem is one way to address this. This paper proposes a novel convolutional neural network with attention-based bidirectional long short-term memory (CNN-AttBiLSTM), a new deep-learning hybrid model for predicting drug-target binding affinities. Because tuning the hyperparameters of such a hybrid model to improve its performance can be arduous and time-intensive, we also propose a Memetic Particle Swarm Optimization Algorithm (MPSOA) for ascertaining the best settings for the proposed model. According to the experimental results, the suggested MPSOA-based CNN-AttBiLSTM model outperforms baseline techniques with a 0.90 concordance index and 0.228 mean square error on the DAVIS dataset, and a 0.97 concordance index and 0.010 mean square error on the KIBA dataset.
{"title":"Bio-inspired algorithm-based hyperparameter tuning for drug-target binding affinity prediction in healthcare","authors":"Moolchand Sharma, S. Deswal","doi":"10.3233/idt-230145","DOIUrl":"https://doi.org/10.3233/idt-230145","url":null,"abstract":"The greatest challenge for healthcare in drug repositioning and discovery is identifying interactions between known drugs and targets. Experimental methods can reveal some drug-target interactions (DTI) but identifying all of them is an expensive and time-consuming endeavor. Machine learning-based algorithms currently cover the DTI prediction problem as a binary classification problem. However, the performance of the DTI prediction is negatively impacted by the lack of experimentally validated negative samples due to an imbalanced class distribution. Hence recasting the DTI prediction task as a regression problem may be one way to solve this problem. This paper proposes a novel convolutional neural network with an attention-based bidirectional long short-term memory (CNN-AttBiLSTM), a new deep-learning hybrid model for predicting drug-target binding affinities. Secondly, it can be arduous and time-intensive to tune the hyperparameters of a CNN-AttBiLSTM hybrid model to augment its performance. To tackle this issue, we suggested a Memetic Particle Swarm Optimization (MPSOA) algorithm, for ascertaining the best settings for the proposed model. According to experimental results, the suggested MPSOA-based CNN- Att-BiLSTM model outperforms baseline techniques with a 0.90 concordance index and 0.228 mean square error in DAVIS dataset, and 0.97 concordance index and 0.010 mean square error in the KIBA dataset.","PeriodicalId":43932,"journal":{"name":"Intelligent Decision Technologies-Netherlands","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87562784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lung cancer is a dangerous disease that causes shortness of breath and death, and automatic lung cancer identification is a challenging task for researchers. This paper presents an effective lung cancer diagnosis system using deep learning with CT images, which also decreases lung cancer misclassification. Initially, the input images are gathered from online resources. The collected CT images are passed to the detection stage, where detection is performed using a Multi Serial Hybrid convolution based Residual Attention Network (MSHCRAN); with this deep learning framework, lung cancer is effectively detected from CT images. The performance of the developed system is compared with other conventional lung cancer detection models. According to the analysis on dataset-1, the implemented deep learning-based lung cancer detection system achieved a precision higher than 95.75%, compared with 90.04% for CNN, 89.62% for ResNet, 92% for LSTM, and 93.4% for CRAN. On dataset-2, the proposed method achieved a precision of 95.8%, compared with 90.43% for CNN, 90.12% for ResNet, 92% for LSTM, and 93.7% for CRAN.
{"title":"Residual attention network based hybrid convolution network model for lung cancer detection","authors":"P. Balaji, Dr Rajanikanth Aluvalu, Kalpna Sagar","doi":"10.3233/idt-230142","DOIUrl":"https://doi.org/10.3233/idt-230142","url":null,"abstract":"Lung cancer is one of the dangerous diseases that cause shortness of breath and death. Automatic lung cancer disease identification is a challenging operation for researchers. This paper, presents an effective lung cancer diagnosis system using deep learning with CT images. It also decreases lung cancer’s misclassification. Initially, the input images are gathered from online resources. The collected CT images are given to the detection stage. Here, we perform the detection using a Multi Serial Hybrid convolution based Residual Attention Network (MSHCRAN). Using a deep learning framework lung cancer detection using CT images is effectively detected. The performance of the developed lung cancer detection system is compared to other conventional lung cancer detection models According to the analysis, the implemented deep learning-based detection of lung cancer system had a precision higher than 95.75% compared to CNN with 90.04%, ResNet with 89.62%, LSTM with 92%, and CRAN with 93.4% using dataset-1. The analysis with Dataset-2 shows a precision of 90.43% with CNN, ResNet with 90.12%, LSTM with 92%, and CRAN with 93.7%, with the proposed method precision of 95.8%.","PeriodicalId":43932,"journal":{"name":"Intelligent Decision Technologies-Netherlands","volume":null,"pages":null},"PeriodicalIF":1.0,"publicationDate":"2023-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90893469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}