Aesthetic Plastic Surgery Issues During the COVID-19 Period Using Topic Modeling
Pub Date: 2024-06-10 | DOI: 10.18517/ijaseit.14.3.18079
Sanghoo Yoon, Young A Kim
This study investigates media coverage of cosmetic surgery in South Korea from 2014 to 2023 using text mining techniques applied to news articles from BigKinds. It focuses on assessing the prevalence of objective information and the societal impacts of capital-driven misinformation. The methodology selected an optimal topic model using perplexity, likelihood, BIC, and similarity measures, identifying five themes within the cosmetic surgery news corpus. Further analysis included quantitative topic recognition via fuzzy clustering by period, sentiment analysis, and network analysis using n-gram techniques to explore relationships between key terms. Findings reveal five main topics in cosmetic surgery news: Consumer Psychology, Cosmetic Surgery Market, Cosmetic Companies and Technologies, Side Effects and Incidents, and the Tourism Industry. The period from 2014 to 2016 saw significant coverage, particularly of medical tourism and surgical side effects, while in 2017 attention shifted to the surgical process and market stability. From 2018 onward, news coverage expanded, especially in May, focusing on cosmetic technology and related industries amid increased outdoor activities. With the COVID-19 pandemic in 2020, coverage of the cosmetic surgery market resurged. In 2023, post-pandemic, articles on cosmetic surgery technology industries and support funds increased. The core terms in cosmetic surgery news clustered around "plastic surgery," "China," and "botulinum." The study sheds light on the potential influence of capital on media portrayals of cosmetic surgery and the resulting societal consequences of misinformation.
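A minimal sketch of the perplexity-based topic-count selection step is shown below, assuming a generic English stand-in corpus (the study used Korean BigKinds articles, and its likelihood, BIC, and similarity criteria are not reproduced here):

# Sketch: choosing a topic count by perplexity with scikit-learn LDA.
# The corpus below is a stand-in; the study used Korean news articles from BigKinds.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "cosmetic surgery clinic opens medical tourism package",
    "botulinum toxin side effects reported after procedure",
    "cosmetic company expands skin technology exports to china",
]

counts = CountVectorizer().fit_transform(docs)

# Fit LDA for several candidate topic counts and keep the one with the lowest perplexity.
best_k, best_perplexity = None, float("inf")
for k in range(2, 7):
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(counts)
    p = lda.perplexity(counts)  # lower is better
    if p < best_perplexity:
        best_k, best_perplexity = k, p

print(best_k, best_perplexity)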
{"title":"Aesthetic Plastic Surgery Issues During the COVID-19 Period Using Topic Modeling","authors":"Sanghoo Yoon, Young A Kim","doi":"10.18517/ijaseit.14.3.18079","DOIUrl":"https://doi.org/10.18517/ijaseit.14.3.18079","url":null,"abstract":"This study investigates media coverage of cosmetic surgery in South Korea from 2014 to 2023 using text mining techniques applied to news articles from BigKinds. It focuses on assessing the prevalence of objective information and the societal impacts of capital-driven misinformation. The research methodology involved optimal topic modeling through perplexity, likelihood, BIC, and similarity measures, identifying five themes within the cosmetic surgery news corpus. Further analysis included quantitative topic recognition via fuzzy clustering by period, sentiment analysis, and network analysis utilizing n-gram techniques to explore relationships between key terms. Findings reveal five main topics covered in cosmetic surgery news: Consumer Psychology, Cosmetic Surgery Market, Cosmetic Companies and Technologies, Side Effects and Incidents, and the Tourism Industry. The period from 2014 to 2016 saw significant coverage, particularly on medical tourism and surgical side effects, while in 2017, attention shifted to the surgical process and market stability. From 2018 onward, news coverage expanded, especially in May, focusing on cosmetic technology and related industries amid increased outdoor activities. With the COVID-19 pandemic in 2020, there was a resurgence in coverage of the cosmetic surgery market. In 2023, post-pandemic, there was an uptick in articles related to cosmetic surgery technology industries and support funds. The core words in cosmetic surgery news were spreading around \"plastic surgery,\" \"China,\" and \"Botulinum\". The study sheds light on the potential influence of capital on media portrayals of cosmetic surgery and the resulting societal consequences of misinformation.","PeriodicalId":14471,"journal":{"name":"International Journal on Advanced Science, Engineering and Information Technology","volume":" 28","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141365312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Medical Record Document Search with TF-IDF and Vector Space Model (VSM)
Pub Date: 2024-06-10 | DOI: 10.18517/ijaseit.14.3.19606
Lukman Heryawan, Dian Novitaningrum, Kartika Rizqi Nastiti, Salsabila Nurulfarah Mahmudah
The volume of medical record documents grows over time, as do the types of diseases and therapies they describe, but this growth has not been matched by an effective and efficient search process. This study addresses searches that often take a long time and return results that do not match expectations by building a search model for medical record documents using the vector space model (VSM) and TF-IDF. The VSM allows retrieval of results that do not exactly match the user's query but are still expected to be relevant to the user's needs. The model was developed from the data in the FS_ANAMNESA and FS_DIAGNOSA columns, followed by preprocessing consisting of deleting blank lines, lowercasing, removing punctuation marks, HTML tags, stop words, and excess spaces between words, and normalizing typographical errors; a TF-IDF matrix was then formed from the frequency of occurrence of each word feature, and the similarity between the search query and each medical record document was computed with the cosine similarity formula. The retrieval results returned all columns of each matching medical record document, sorted by the 10 rows with the highest similarity values. Evaluated on 1,000 medical record documents with 20 search queries, the model gave an average precision of 0.548 and an average recall of 0.796.
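A minimal sketch of the TF-IDF/VSM retrieval step with cosine similarity and a top-10 cutoff, assuming already-preprocessed text and hypothetical records in place of the FS_ANAMNESA and FS_DIAGNOSA data:

# Sketch: TF-IDF + cosine similarity retrieval over (hypothetical) medical record text.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

records = [
    "patient reports headache and nausea for two days",
    "follow-up visit for type 2 diabetes, blood sugar controlled",
    "cough and fever, suspected upper respiratory infection",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(records)  # TF-IDF matrix of the corpus

query = "fever and cough"
query_vec = vectorizer.transform([query])

scores = cosine_similarity(query_vec, doc_matrix).ravel()
top10 = np.argsort(scores)[::-1][:10]  # indices of the 10 most similar records
for i in top10:
    print(f"{scores[i]:.3f}  {records[i]}")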
{"title":"Medical Record Document Search with TF-IDF and Vector Space Model (VSM)","authors":"Lukman Heryawan, Dian Novitaningrum, Kartika Rizqi Nastiti, Salsabila Nurulfarah Mahmudah","doi":"10.18517/ijaseit.14.3.19606","DOIUrl":"https://doi.org/10.18517/ijaseit.14.3.19606","url":null,"abstract":"The growth of medical record documents is increasing over time, and the various types of diseases and therapies needed are increasing. However, this has not been followed by an effective and efficient search process. This study aims to deal with search problems that often take a long time with search results that are not necessarily as expected by building a search model for medical record documents using the vector space model (VSM) and TF-IDF methods. The VSM method allows retrieval of results that are not the same as the search queries entered by the user but are expected to provide still results relevant to the user's desired needs. The model development process was taken based on the data in the FS_ANAMNESA and FS_DIAGNOSA columns, followed by preprocessing, which consists of deleting blank lines, lowercase, removing punctuation marks, HTML tags, stop words, excess spaces between words, and normalizing typo words, then forming a TF-IDF matrix based on the frequency of occurrence of each word feature, and followed by the calculation of the similarity value of the search query compared to medical record documents based on the cosine similarity formula. The retrieval results were all columns of each existing medical record document and were sorted based on 10 rows with the highest similarity value. The model evaluation results were based on 1000 medical record documents and tested with 20 search queries in this study, which gave an average precision value of 0.548 and an average recall value of 0.796.","PeriodicalId":14471,"journal":{"name":"International Journal on Advanced Science, Engineering and Information Technology","volume":"105 44","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141361484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Comprehensive Review of Machine Learning Approaches for Detecting Malicious Software
Pub Date: 2024-06-05 | DOI: 10.18517/ijaseit.14.3.19993
Yuanming Liu, Rodziah Latih
With the continuous development of technology, the types of malware and their variants continue to increase, posing an enormous challenge to network security. These malware use a variety of technical means to deceive or evade traditional detection, rendering traditional signature- and rule-based identification methods no longer adequate. Many machine learning algorithms have attracted widespread academic attention as powerful malware detection and classification methods in recent years. After an in-depth study of the literature and a comprehensive survey of the latest research results, feature extraction is treated as the basis for classification. By extracting meaningful features from malware samples, such as behavioral patterns, code structures, and file attributes, researchers can discern characteristics that distinguish malicious software from benign software. This process is the foundation for developing effective detection models and understanding the underlying mechanisms of malware behavior. We divide the surveyed work into two categories: feature engineering and learning-based methods. Feature engineering involves selecting and extracting relevant features from raw data, while learning-based methods leverage machine learning algorithms to analyze and classify malware based on these features. Supervised, unsupervised, and deep learning techniques have shown promise in accurately detecting and classifying malware, even in the face of evolving threats. On this basis, we further examine the current problems and challenges facing malware identification research.
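As a loose illustration of the feature-based, learning-based pipeline surveyed here, the sketch below trains a generic supervised classifier on hand-crafted numeric features (file size, section entropy, suspicious API-call count); the features and labels are synthetic placeholders, not a real malware dataset:

# Sketch: a generic supervised malware classifier over hand-crafted features (synthetic data).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Columns: [file size (KB), average section entropy, suspicious API-call count] -- placeholders.
X = rng.normal(loc=[500.0, 6.0, 10.0], scale=[200.0, 1.0, 5.0], size=(200, 3))
y = rng.integers(0, 2, size=200)  # 0 = benign, 1 = malicious (random placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))  # near chance on random labels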
{"title":"A Comprehensive Review of Machine Learning Approaches for Detecting Malicious Software","authors":"Yuanming Liu, Rodziah Latih","doi":"10.18517/ijaseit.14.3.19993","DOIUrl":"https://doi.org/10.18517/ijaseit.14.3.19993","url":null,"abstract":"With the continuous development of technology, the types of malware and their variants continue to increase, which has become an enormous challenge to network security. These malware use a variety of technical means to deceive or evade traditional detection methods, making traditional signature-based rule-based malware identification methods no longer applicable. Many machine algorithms have attracted widespread academic attention as powerful malware detection and classification methods in recent years. After an in-depth study of rich literature and a comprehensive survey of the latest scientific research results, feature extraction is used as the basis for classification. By extracting meaningful features from malware samples, such as behavioral patterns, code structures, and file attributes, researchers can discern unique characteristics that distinguish malicious software from benign ones. This process is the foundation for developing effective detection models and understanding the underlying mechanisms of malware behavior. We divide feature engineering and learning-based methods into two categories for investigation. Feature engineering involves selecting and extracting relevant features from raw data, while learning-based methods leverage machine learning algorithms to analyze and classify malware based on these features. Supervised, unsupervised, and deep learning techniques have shown promise in accurately detecting and classifying malware, even in the face of evolving threats. On this basis, we further look into the current problems and challenges malware identification research faces.","PeriodicalId":14471,"journal":{"name":"International Journal on Advanced Science, Engineering and Information Technology","volume":"15 8","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141384906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Mixed MEWMA and MCUSUM Control Chart Design of Efficiency Series Data of Production Quality Process Monitoring
Pub Date: 2024-06-05 | DOI: 10.18517/ijaseit.14.3.19747
D. Devianto, Maiyastri, Y. Asdi, Sri Maryati, Surya Puspita Sari, Rahmat Hidayat
A control chart is a crucial statistical tool for tracking the mean and dispersion of a quality characteristic. More sensitive control charts have also been developed to detect minor shifts in the monitored process, particularly when multivariate and mixed models are used. The well-known multivariate control chart is the Hotelling T2 chart; to achieve better sensitivity in the multivariate setting, the MEWMA and MCUSUM charts were developed. To obtain a more sensitive multivariate control chart, this study proposes the MCUSUM type I (MC I) and MCUSUM type II (MC II) charts and their efficiency combinations, the Mixed MEWMA-MCUSUM type I (MEC I) and the Mixed MEWMA-MCUSUM type II (MEC II). The study assesses which multivariate control chart is more sensitive by focusing on its ability to detect more out-of-control observations in a single control phase. The data cover wheat flour manufacturing, with 1,380 observations in 30 subgroups of 46 observations each; moisture, ash, and gluten are the quality-related variables used. The aim is to develop the best mixed control chart design for monitoring the production and quality process of flour production. Based on the findings, the MEC I control chart proved the most sensitive, outperforming the other multivariate control charts.
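For reference, a minimal sketch of the Hotelling T2 statistic that these multivariate charts build on, computed for one synthetic subgroup of three quality variables analogous to moisture, ash, and gluten (the MEWMA/MCUSUM and mixed-chart recursions themselves are not reproduced here):

# Sketch: Hotelling T^2 statistic for three quality variables (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(46, 3))  # one subgroup: 46 observations, 3 variables

mean = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))

# T^2_i = (x_i - mean)' S^{-1} (x_i - mean) for each observation
diff = X - mean
t2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
print(t2[:5])  # observations with large T^2 are candidates for out-of-control signals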
{"title":"The Mixed MEWMA and MCUSUM Control Chart Design of Efficiency Series Data of Production Quality Process Monitoring","authors":"D. Devianto, Maiyastri, Y. Asdi, Sri Maryati, Surya Puspita Sari, Rahmat Hidayat","doi":"10.18517/ijaseit.14.3.19747","DOIUrl":"https://doi.org/10.18517/ijaseit.14.3.19747","url":null,"abstract":"A control chart is a crucial statistical tool for tracking the average quality of the dispersion. A more sensitive control chart is also developed to detect minor changes in the efficiency monitoring process, along with the times when using multivariate and mixed models. The well-known multivariate control chart was introduced as T2 Hotelling; then, to achieve better sensitivity in multivariable, a control chart design was developed for MEWMA and MCUSUM. To find a more sensitive multivariate control chart, it is proposed the control chart MCUSUM type I (MC I) and MCUSUM type II (MC II), and their combination of efficiency as the Mixed MEWMA-MCUSUM type I (MEC I), and the Mixed MEWMA-MCUSUM type II (MEC II). This study was carried out to assess which multivariate control chart is more sensitive by focusing on the ability of the control chart to detect more out-of-control observations in a single control phase. This study used data on the manufacture of wheat flour with 1,380 observations, 30 subgroups, and 46 observations per subgroup. Moisture, ash, and gluten are the quality-related manufacturing data variables used. This study aims to develop the best-mixed control chart design of efficiency for production and quality process monitoring of flour production. Based on the study's findings, the MEC I control chart was shown to be the most sensitive, and this study also demonstrates that it is more sensitive than other multivariate control charts.","PeriodicalId":14471,"journal":{"name":"International Journal on Advanced Science, Engineering and Information Technology","volume":"26 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141384855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Revolutionizing Echocardiography: A Comparative Study of Advanced AI Models for Precise Left Ventricular Segmentation
Pub Date: 2024-06-05 | DOI: 10.18517/ijaseit.14.3.18073
Dong Ok Kim, MinSu Chae, Hwamin Lee
Cardiovascular diseases, a leading cause of global mortality, underscore the urgency for refined diagnostic techniques. Among these, cardiomyopathies characterized by abnormal heart wall thickening present a formidable challenge, exacerbated by aging populations and the side effects of chemotherapy. Traditional echocardiogram analysis, demanding considerable time and expertise, now faces overwhelming pressure due to escalating demands for cardiac care. This study addresses these challenges by harnessing Convolutional Neural Networks, specifically YOLOv8, U-Net, and Attention U-Net, and leveraging the EchoNet-Dynamic dataset from Stanford University Hospital to segment echocardiographic images. Our investigation aimed to optimize and compare these models for segmenting the left ventricle in echocardiography images, a crucial step for quantifying key cardiac parameters. We demonstrate the superiority of U-Net and Attention U-Net over YOLOv8, with Attention U-Net achieving the highest Dice Coefficient Score due to its focus on relevant features via attention mechanisms. This finding highlights the importance of model specificity in medical image segmentation and points to attention mechanisms as a promising direction. The integration of AI in echocardiography represents a pivotal shift toward precision medicine, improving diagnostic accuracy and operational efficiency. Our results advocate for the continued development and application of AI-driven models, underscoring their potential to transform cardiovascular diagnostics through enhanced precision and multimodal data integration. This study validates the effectiveness of state-of-the-art AI models in cardiac function assessment and paves the way for their implementation in clinical settings, thereby contributing significantly to the advancement of cardiac healthcare delivery.
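Because the model comparison rests on the Dice Coefficient Score, a minimal sketch of that metric for binary segmentation masks is shown below (the toy arrays stand in for predicted and ground-truth left-ventricle masks):

# Sketch: Dice coefficient between a predicted and a ground-truth binary mask.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks; eps guards against empty masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks standing in for left-ventricle segmentations.
pred = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 1, 0], [0, 0, 0, 0]])
target = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]])
print(round(dice_coefficient(pred, target), 3))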
{"title":"Revolutionizing Echocardiography: A Comparative Study of Advanced AI Models for Precise Left Ventricular Segmentation","authors":"Dong Ok Kim, MinSu Chae, Hwamin Lee","doi":"10.18517/ijaseit.14.3.18073","DOIUrl":"https://doi.org/10.18517/ijaseit.14.3.18073","url":null,"abstract":"Cardiovascular diseases, a leading cause of global mortality, underscore the urgency for refined diagnostic techniques. Among these, cardiomyopathies characterized by abnormal heart wall thickening present a formidable challenge, exacerbated by aging populations and the side effects of chemotherapy. Traditional echocardiogram analysis, demanding considerable time and expertise, now faces overwhelming pressure due to escalating demands for cardiac care. This study addresses these challenges by harnessing the potential of Convolutional Neural Networks, specifically YOLOv8, U-Net, and Attention U-Net, leveraging the EchoNet-Dynamic dataset from Stanford University Hospital to segment echocardiographic images. Our investigation aimed to optimize and compare these models for segmenting the left ventricle in echocardiography images, a crucial step for quantifying key cardiac parameters. We demonstrate the superiority of U-Net and Attention U-Net over YOLOv8, with Attention U-Net achieving the highest Dice Coefficient Score due to its focus on relevant features via attention mechanisms. This finding highlights the importance of model specificity in medical image segmentation and points to attention mechanisms. The integration of AI in echocardiography represents a pivotal shift toward precision medicine, improving diagnostic accuracy and operational efficiency. Our results advocate for the continued development and application of AI-driven models, underscoring their potential to transform cardiovascular diagnostics through enhanced precision and multimodal data integration. This study validates the effectiveness of state-of-the-art AI models in cardiac function assessment and paves the way for their implementation in clinical settings, thereby contributing significantly to the advancement of cardiac healthcare delivery.","PeriodicalId":14471,"journal":{"name":"International Journal on Advanced Science, Engineering and Information Technology","volume":"48 s15","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141383161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparison and Analysis of CNN Models to Improve a Facial Emotion Classification Accuracy for Koreans and East Asians
Pub Date: 2024-06-03 | DOI: 10.18517/ijaseit.14.3.18078
Jun-Hyeong Lee, Ki-Sang Song
Facial emotion recognition is one of the popular tasks in computer vision. Face recognition techniques based on deep learning can provide the best performance, but they require a large amount of labeled face data. Available large-scale facial datasets are predominantly Western and contain very few Asian faces, and we found that models trained on these datasets were less accurate at identifying Asians than Westerners. Therefore, to increase the accuracy of facial identification for Asians, we compared and analyzed various previously studied CNN models. We also added Asian faces and face data from realistic situations to the existing dataset and compared the results. In the model comparison, the VGG16 and Xception models showed high prediction rates for facial emotion recognition, and the more diverse the dataset, the higher the prediction rate. The prediction rate on the East Asian dataset for the model trained on FER2013 was relatively low, whereas the model trained on KFE predicted FER2013 relatively well; however, because the East Asian dataset is small, caution is needed in interpretation. This study confirms that large CNN models can be used for facial emotion analysis, but that selecting an appropriate model is essential, and that prediction rates increase as more diverse and larger datasets are used for training.
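A minimal sketch of the kind of transfer-learning setup such a comparison involves, fine-tuning a pretrained VGG16 backbone for seven emotion classes as in FER2013; the layer sizes and training call are illustrative assumptions, not the authors' exact configuration:

# Sketch: fine-tuning a pretrained VGG16 backbone for 7-class facial emotion recognition.
import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(48, 48, 3))
base.trainable = False  # freeze the convolutional backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(7, activation="softmax"),  # 7 emotion classes as in FER2013
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, validation_data=(val_images, val_labels), epochs=10)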
{"title":"Comparison and Analysis of CNN Models to Improve a Facial Emotion Classification Accuracy for Koreans and East Asians","authors":"Jun-Hyeong Lee, Ki-Sang Song","doi":"10.18517/ijaseit.14.3.18078","DOIUrl":"https://doi.org/10.18517/ijaseit.14.3.18078","url":null,"abstract":"Facial emotion recognition is one of the popular tasks in computer vision. Face recognition techniques based on deep learning can provide the best face recognition performance, but using these techniques requires a lot of labeled face data. Available large-scale facial datasets are predominantly Western and contain very few Asians. We found that models trained using these datasets were less accurate at identifying Asians than Westerners. Therefore, to increase the accuracy of Asians' facial identification, we compared and analyzed various CNN models that had been previously studied. We also added Asian faces and face data in realistic situations to the existing dataset and compared the results. As a result of model comparison, VGG16 and Xception models showed high prediction rates for facial emotion recognition in this study. and the more diverse the dataset, the higher the prediction rate. The prediction rate of the East Asian dataset for the model trained on FER2013 was relatively low. However, for data learned with KFE, the model prediction of FER2013 was predicted to be relatively high. However, because the number of East Asian datasets is small, caution is needed in interpretation. Through this study, it was confirmed that large CNN models can be used for facial emotion analysis, but that selection of an appropriate model is essential. In addition, it was confirmed once again that a variety of datasets and the prediction rate increase as a large amount of data is learned.","PeriodicalId":14471,"journal":{"name":"International Journal on Advanced Science, Engineering and Information Technology","volume":"52 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141388521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Emotion Recognition and Multi-class Classification in Music with MFCC and Machine Learning
Pub Date: 2024-06-03 | DOI: 10.18517/ijaseit.14.3.18671
Gilsang Yoo, Sungdae Hong, Hyeocheol Kim
Background music in OTT services significantly enhances narratives and conveys emotions, yet users with hearing impairments might not fully experience this emotional context. This paper illuminates the pivotal role of background music in user engagement on OTT platforms and introduces a novel system designed to mitigate the challenges the hearing-impaired face in appreciating the emotional nuances of music. The system identifies the mood of background music and translates it into textual subtitles, making emotional content accessible to all users. The proposed method extracts key audio features, including Mel Frequency Cepstral Coefficients (MFCC), Root Mean Square (RMS) energy, and Mel spectrograms. It then applies leading machine learning algorithms, namely Logistic Regression, Random Forest, AdaBoost, and Support Vector Classification (SVC), to analyze the emotional traits embedded in the music and identify its sentiment. Among these, the Random Forest algorithm applied to MFCC features demonstrated exceptional accuracy, reaching 94.8% in our tests. The significance of this technology extends beyond feature identification; it promises to improve the accessibility of multimedia content. By automatically generating emotionally resonant subtitles, the system can enrich the viewing experience for all users, particularly those with hearing impairments. This advancement underscores the critical role of music in storytelling and emotional engagement and highlights the potential of machine learning to enhance the inclusivity and enjoyment of digital entertainment across diverse audiences.
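A minimal sketch of the MFCC-plus-Random-Forest pipeline described above; the clip paths, labels, and sampling rate are hypothetical placeholders, not the paper's dataset:

# Sketch: MFCC-based mood classification with a Random Forest (hypothetical clips and labels).
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def mfcc_features(path: str, n_mfcc: int = 20) -> np.ndarray:
    """Load an audio clip and summarize its MFCCs as per-coefficient means."""
    y, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Hypothetical training clips and mood labels standing in for the study's data.
paths = ["clip_happy.wav", "clip_sad.wav", "clip_tense.wav"]
labels = ["happy", "sad", "tense"]

X = np.stack([mfcc_features(p) for p in paths])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print(clf.predict([mfcc_features("clip_new.wav")]))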
{"title":"Emotion Recognition and Multi-class Classification in Music with MFCC and Machine Learning","authors":"Gilsang Yoo, Sungdae Hong, Hyeocheol Kim","doi":"10.18517/ijaseit.14.3.18671","DOIUrl":"https://doi.org/10.18517/ijaseit.14.3.18671","url":null,"abstract":"Background music in OTT services significantly enhances narratives and conveys emotions, yet users with hearing impairments might not fully experience this emotional context. This paper illuminates the pivotal role of background music in user engagement on OTT platforms. It introduces a novel system designed to mitigate the challenges the hearing-impaired face in appreciating the emotional nuances of music. This system adeptly identifies the mood of background music and translates it into textual subtitles, making emotional content accessible to all users. The proposed method extracts key audio features, including Mel Frequency Cepstral Coefficients (MFCC), Root Mean Square (RMS), and MEL Spectrograms. It then harnesses the power of leading machine learning algorithms Logistic Regression, Random Forest, AdaBoost, and Support Vector Classification (SVC) to analyze the emotional traits embedded in the music and accurately identify its sentiment. Among these, the Random Forest algorithm, applied to MFCC features, demonstrated exceptional accuracy, reaching 94.8% in our tests. The significance of this technology extends beyond mere feature identification; it promises to revolutionize the accessibility of multimedia content. By automatically generating emotionally resonant subtitles, this system can enrich the viewing experience for all, particularly those with hearing impairments. This advancement not only underscores the critical role of music in storytelling and emotional engagement but also highlights the vast potential of machine learning in enhancing the inclusivity and enjoyment of digital entertainment across diverse audiences.","PeriodicalId":14471,"journal":{"name":"International Journal on Advanced Science, Engineering and Information Technology","volume":"15 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141388291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Design and Evaluation of CAD Custom Batik User Interface
Pub Date: 2024-04-15 | DOI: 10.18517/ijaseit.14.2.18807
Nova Suparmanto, Anna Maria Sri Asih, Andi Sudiarso, Insap Santoso
The popularity of batik reflects consumer demand for purchase fulfillment. Digital technology can bring several advantages and new opportunities to the custom batik design process, and Computer-Aided Design (CAD) is a popular digital tool. This paper presents a new User Interface (UI) design for CAD custom batik, namely Batik 4.0, software developed to provide services such as pattern customization in various sizes, a batik character input system, pricing, and production time estimation for the batik industry in Indonesia, with results that can be forwarded directly to the manufacturing process. UI design for CAD batik has not been studied previously, and this paper fills that gap and improves usability. The evaluation tests the usability of the existing UI using a performance matrix and RTA. Improvements were made with a wireframe using human-computer interaction (HCI) principles and usability testing data. User-centered design (UCD), which focuses on the role of the user in system development, was used as the wireframe design method. This research presents UI wireframe designs for several types of users. After the UI wireframe design was created, a usability evaluation was performed again. The evaluation shows that the new prototypes produce fewer errors and achieve higher usability scores than the previous design. The new design is more user-friendly and can serve as a reference for the future development and improvement of CAD custom batik.
{"title":"The Design and Evaluation of CAD Custom Batik User Interface","authors":"Nova Suparmanto, Anna Maria Sri Asih, Andi Sudiarso, Insap Santoso","doi":"10.18517/ijaseit.14.2.18807","DOIUrl":"https://doi.org/10.18517/ijaseit.14.2.18807","url":null,"abstract":"The popularity of batik reflects consumer demands for purchasing fulfillment. Digital technology can bring several advantages and new opportunities to the custom batik design process. Computer-Aided Design (CAD) is a popular digital tool used. This paper will provide the new User Interface (UI) design of CAD custom design batik, namely Batik 4.0. It’s software developed to provide services, such as pattern customization in various sizes, batik character input system, pricing, and production time estimation utilized by the batik industry in Indonesia that can be directly forwarded to the manufacturing process. Research in UI design for CAD batik has not been studied, and this paper will fill the significant gap and upgrade the usability. This evaluation will test the usability of the test, employing the performance matrix and RTA towards UI that already exists. Improvement was made with a wireframe using human-computer interaction (HCI) and usability testing data. User-centered design (UCD) focuses on the role of the user in the process of system development as the wireframe design method. This research shows UI wireframe design for several types of users. In making the UI wireframe design, a usability evaluation was performed again. The evaluation result shows that the new user prototypes have fewer errors and exceed the usability value compared to the previous one. The new design is more user-friendly and can be used as a reference for the future development and improvement of CAD custom batik.","PeriodicalId":14471,"journal":{"name":"International Journal on Advanced Science, Engineering and Information Technology","volume":"17 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140699004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Simulation Study on a Premixed-charge Compression Ignition Mode-based Engine Using a Blend of Biodiesel/Diesel Fuel under a Split Injection Strategy
Pub Date: 2024-04-15 | DOI: 10.18517/ijaseit.14.2.20007
Dao Nam Cao, Anish Jafrin Thilak Johnson
Environmental pollution from transportation and the degradation of natural resources are top global concerns. According to statistics, NOx and PM emissions from vehicles account for 70% of total emissions in urban areas, so finding solutions to reduce NOx and PM emissions is necessary. Changing the engine's internal combustion method is considered one of the most promising of the known solutions. One research direction is a combustion engine using the Premixed Charge Compression Ignition (PCCI) method combined with biofuels to improve mixture formation and the combustion process, reducing NOx and PM emissions. This study therefore presents the mechanisms of PM and NOx formation in the traditional combustion and low-temperature combustion processes of internal combustion engines, and also introduces the theoretical basis of flame spread during combustion. The key feature of this research is that it models the combustion process in diesel engines under PCCI modes, using blends of waste cooking oil (WCO)-based biodiesel and diesel fuel and the ANSYS Fluent software. The results show that PCCI combustion using B20 fuel can significantly reduce NOx and PM emissions, although HC and CO emissions tend to increase and thermal efficiency tends to decrease. In further studies, different modes of the PCCI combustion process should be thoroughly examined so that the approach can be implemented in practice to reduce pollutant emissions.
{"title":"A Simulation Study on a Premixed-charge Compression Ignition Mode-based Engine Using a Blend of Biodiesel/Diesel Fuel under a Split Injection Strategy","authors":"Dao Nam Cao, Anish Jafrin Thilak Johnson","doi":"10.18517/ijaseit.14.2.20007","DOIUrl":"https://doi.org/10.18517/ijaseit.14.2.20007","url":null,"abstract":"Environmental pollution from transportation means and natural resource degradation are the top concern globally. According to statistics, NOx and PM emissions from vehicles account for 70% of total emissions in urban areas. Therefore, finding solutions to reduce NOx and PM emissions is necessary. Changing the engine's internal combustion method is considered promising and influential among the known solutions. One of the research directions is a combustion engine using the Premixed Charge Compression Ignition (PCCI) method combined with biofuels to improve the mixture formation and combustion process, reducing NOx and PM emissions. Therefore, this study presents the mechanism of the formation of PM and NOx emissions in the traditional combustion and the low-temperature combustion process of internal combustion engines. Besides, the theoretical basis of flame spread during combustion is also introduced. The key feature of this research is that it has modeled the combustion process in diesel engines under the PCCI modes. This was accomplished using blends of waste cooking oil (WCO)-based biodiesel and diesel fuel, as well as the ANSYS Fluent software. The results showed that PCCI combustion using B20 fuel can significantly reduce NOx and PM emissions, although HC and CO emissions tend to increase, and thermal efficiency tends to decrease. In further studies, different modes of the PCCI combustion process should be thoroughly examined so that this process can be implemented in practice to reduce pollutant emissions.","PeriodicalId":14471,"journal":{"name":"International Journal on Advanced Science, Engineering and Information Technology","volume":"48 12","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140701803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online Real-Time Monitoring System of A Structural Steel Railway Bridge Using Wireless Smart Sensors
Pub Date: 2024-04-14 | DOI: 10.18517/ijaseit.14.2.19291
O. A. Qowiy, W. A. Aspar, Herry Susanto, T. Fiantika, Suwarjono, A. Muharam, F. D. Setiawan, Rahmat Burhanuddin
In the transportation network, railway bridges are crucial for the transfer of both passengers and commodities and require continuous monitoring of their performance. A structural health monitoring system is one method for assessing the viability of a railway bridge structure, and the behavior of railway bridge structures has been extensively observed using wireless technology. This research aims to implement smart wireless sensors for monitoring the structural health of a railway bridge online, in real time, during operation. Several kinds of sensors were installed on the railway bridge, including strain gauges, accelerometers, linear variable displacement transducers, and proximity sensors. Geometric modeling and numerical simulation were performed to find the critical frame locations on the bridge where the instrumentation sensors would be placed. In this study, MONITA® is employed for the data acquisition modules; the MONITA® system combines hardware and software that retrieve, send, store, and process data. This paper describes the implementation of this method for understanding the performance of the steel railway bridge structure in real time via a human-machine interface dashboard. The results show that the monitoring system can appropriately be used to assess a railway bridge structure in real time. This study may be helpful to practicing engineers and researchers in future evaluations of steel railway bridges and could serve as a useful reference for implementing such systems as an early warning technique for detecting bridge damage.
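As a loose illustration of how accelerometer data from such a system might be processed, the sketch below estimates a dominant vibration frequency from a synthetic signal with an FFT; this is a generic example, not part of the MONITA® software:

# Sketch: estimating a bridge's dominant vibration frequency from accelerometer samples.
# Synthetic signal; a real SHM pipeline would stream data from the acquisition modules.
import numpy as np

fs = 100.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)                 # 60 s of data
signal = np.sin(2 * np.pi * 2.4 * t) + 0.3 * np.random.default_rng(2).normal(size=t.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
dominant = freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin
print(f"dominant frequency: {dominant:.2f} Hz")  # shifts over time can indicate structural change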
{"title":"Online Real-Time Monitoring System of A Structural Steel Railway Bridge Using Wireless Smart Sensors","authors":"O. A. Qowiy, W. A. Aspar, Herry Susanto, T. Fiantika, Suwarjono, A. Muharam, F. D. Setiawan, Rahmat Burhanuddin","doi":"10.18517/ijaseit.14.2.19291","DOIUrl":"https://doi.org/10.18517/ijaseit.14.2.19291","url":null,"abstract":"In the transportation network, railway bridges are crucial for the transfer of both passengers and commodities. Railway bridges require continuous monitoring to observe their performance. A structural health monitoring system is one method for assessing the viability of a railway bridge structure. The functioning of railroad bridge structures has been extensively observed using wireless technology. This research aims to implement smart wireless sensors for monitoring the structural health of the railway bridge online in real-time during operation. Many sensor kinds were installed on the railway bridge, including strain gauges, accelerometers, linear variable displacement transducers, and proximity sensors. Geometric modeling and numerical simulation were performed to find critical frame locations on the railway bridge where the instrumentation sensors would be placed. In this study, MONITA® is employed for data acquisition modules. The MONITA® system consists of a combination of hardware and software that functions to retrieve, send, store, and process data. This paper describes the result of the establishment of this method to comprehend the performance of the steel railway bridge structure in real-time via the human-machine interface display dashboard. As a result, the monitoring system can appropriately be used to assess a structural railway bridge in real-time. This study may be helpful to practicing engineers and researchers in future studies of steel railway bridge evaluation. This could be a useful reference for future studies in implementing such systems as the railway bridge early warning system technique in detecting bridge damage.","PeriodicalId":14471,"journal":{"name":"International Journal on Advanced Science, Engineering and Information Technology","volume":"14 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140706196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}