Inceptionv3-LSTM-COV: A multi-label framework for identifying adverse reactions to COVID medicine from chemical conformers based on Inceptionv3 and long short-term memory
Pranab Das, Dilwar Hussain Mazumder
ETRI Journal, published 2024-02-25. DOI: 10.4218/etrij.2023-0288

Owing to the global COVID-19 pandemic, distinct medicines have been developed for treating the coronavirus disease (COVID). However, predicting and identifying potential adverse reactions to these medicines poses significant challenges for producing effective COVID medication. Accurate prediction of adverse reactions to COVID medications is crucial for ensuring patient safety and medication success. Recent advancements in computational models used in pharmaceutical production have opened up new possibilities for detecting such adverse reactions. Given the urgent need for effective COVID medication development, this research presents a multi-label Inceptionv3 and long short-term memory methodology for COVID (Inceptionv3-LSTM-COV) medicine development. The experimental evaluations were conducted using chemical conformer images of COVID medicines. The features of each chemical conformer are represented through the RGB color channels and extracted using Inceptionv3, GlobalAveragePooling2D, and long short-term memory (LSTM) layers. The results demonstrate that the Inceptionv3-LSTM-COV model outperformed the previous study's results as well as the MLCNN-COV, Inceptionv3, ResNet50, MobileNetv2, VGG19, and DenseNet201 models. The proposed model achieved the highest accuracy, 99.19%, in predicting adverse reactions to COVID medicine.
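The pipeline described above (an Inceptionv3 backbone, a GlobalAveragePooling2D layer, and an LSTM feeding a multi-label head) can be sketched in Keras as follows. This is a minimal illustration, not the authors' exact network: the label count, LSTM width, and input size are assumptions, and the pooled feature vector is treated as a one-step sequence for the LSTM.

```python
# Sketch: Inceptionv3 -> GlobalAveragePooling2D -> LSTM -> multi-label head.
# num_labels, the LSTM width (128), and the 299x299 input are assumed values.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(num_labels, input_shape=(299, 299, 3)):
    backbone = tf.keras.applications.InceptionV3(
        include_top=False, weights=None, input_shape=input_shape)
    x = layers.GlobalAveragePooling2D()(backbone.output)  # -> (batch, 2048)
    x = layers.Reshape((1, 2048))(x)                      # one-step sequence for the LSTM
    x = layers.LSTM(128)(x)
    # Sigmoid (not softmax): each adverse-reaction label is predicted independently.
    outputs = layers.Dense(num_labels, activation="sigmoid")(x)
    return models.Model(backbone.input, outputs)

model = build_model(num_labels=10)
model.compile(optimizer="adam", loss="binary_crossentropy")
```

The sigmoid activation with binary cross-entropy is the standard choice for multi-label classification, since several adverse-reaction labels may be active for the same conformer image.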
Ayoung Kim, Eun-Vin An, Soon-heung Jung, Hyon-Gon Choo, Jeongil Seo, Kwang-deok Seo
Suboptimal video coding for machines method based on selective activation of in-loop filter
Ayoung Kim, Eun-Vin An, Soon-heung Jung, Hyon-Gon Choo, Jeongil Seo, Kwang-deok Seo
ETRI Journal, vol. 46, no. 3, pp. 538-549, published 2024-02-25. DOI: 10.4218/etrij.2023-0085

A conventional codec aims to increase compression efficiency for transmission and storage while maintaining video quality. However, as the number of platforms using machine vision rapidly increases, a codec that increases compression efficiency while maintaining the accuracy of machine vision tasks must be devised. Hence, the Moving Picture Experts Group created a standardization process for video coding for machines (VCM) to reduce bitrates while preserving the accuracy of machine vision tasks. In particular, in-loop filters have been developed to improve both subjective quality and machine vision task accuracy. However, the high computational complexity of in-loop filters limits the development of a high-performance VCM architecture. We analyze the effect of an in-loop filter on VCM performance and propose a suboptimal VCM method based on the selective activation of in-loop filters. The proposed method reduces the computation time for video coding by approximately 5% when using the enhanced compression model and 2% when employing a Versatile Video Coding test model, while maintaining the machine vision accuracy and compression efficiency of the VCM architecture.
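The core idea, enabling an in-loop filter only when its benefit to the machine vision task justifies its computational cost, can be illustrated with a small sketch. This is a hypothetical decision rule, not the paper's method: the filter names, cost/gain figures, and threshold are all invented for illustration.

```python
# Hypothetical sketch of selective in-loop filter activation: each filter is
# enabled only if its estimated machine-task accuracy gain per millisecond of
# extra encoding time clears a threshold. All numbers here are assumptions.
from dataclasses import dataclass

@dataclass
class FilterProfile:
    name: str
    time_cost_ms: float   # extra encoding time per frame
    accuracy_gain: float  # estimated machine-task accuracy improvement

def select_filters(profiles, min_gain_per_ms=0.002):
    """Return the names of filters whose gain-per-cost ratio clears the threshold."""
    return [p.name for p in profiles
            if p.accuracy_gain / p.time_cost_ms >= min_gain_per_ms]

profiles = [
    FilterProfile("deblocking", 1.0, 0.010),
    FilterProfile("sao",        2.0, 0.006),
    FilterProfile("alf",        8.0, 0.004),
]
print(select_filters(profiles))  # -> ['deblocking', 'sao']
```

Under this rule the expensive filter is skipped when its accuracy contribution is marginal, which is the intuition behind trading a small amount of filtering for a measurable reduction in encoding time.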
Sangyeop Yeo, Yu-Seung Ma, Sang Cheol Kim, Hyungkook Jun, Taeho Kim
Large language models (LLMs) have revolutionized various applications in natural language processing and exhibited proficiency in generating programming code. We propose a framework for evaluating the code generation ability of LLMs and introduce a new metric,