Purpose: This study aims to predict the closing price of the EUR/JPY currency pair in the forex market using recurrent neural network (RNN) architectures, namely Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), with the incorporation of Bidirectional layers. Methods: The dataset comprises hourly price data obtained from Yahoo Finance and pre-processed accordingly. The data is divided into training and testing sets, and time series sequences are constructed for input into the models. The RNN, LSTM, and GRU models are trained using the Adam optimization algorithm with the mean squared error (MSE) loss metric. Results: The LSTM model, particularly when coupled with Bidirectional layers, exhibits superior predictive performance compared with the other models, as evidenced by lower MSE values. Conclusions: The LSTM model with Bidirectional layers is therefore the most effective at predicting the EUR/JPY currency pair's closing price in the forex market. These findings offer valuable insights for practitioners and researchers involved in financial market prediction and neural network modelling.
"Forecasting Next-Time-Step Forex Market Stock Prices Using Neural Networks." Advances in Machine Learning & Artificial Intelligence, 2024-05-16. DOI: 10.33140/amlai.05.02.09.
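The Methods describe constructing time-series sequences from hourly closes, splitting them chronologically, and scoring models by MSE. A minimal sketch of that data-preparation step in pure NumPy, with an assumed 24-hour window (the abstract does not specify the sequence length):

```python
import numpy as np

def make_sequences(prices, window=24):
    """Build sliding-window sequences: each input holds `window`
    consecutive closes; the target is the close one step ahead."""
    X, y = [], []
    for i in range(len(prices) - window):
        X.append(prices[i:i + window])
        y.append(prices[i + window])
    return np.array(X), np.array(y)

def chronological_split(X, y, train_frac=0.8):
    """Split without shuffling, so the test set lies strictly in the
    future relative to the training set."""
    cut = int(len(X) * train_frac)
    return X[:cut], X[cut:], y[:cut], y[cut:]

def mse(y_true, y_pred):
    """Mean squared error, the metric used to compare the models."""
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

# Synthetic stand-in for hourly EUR/JPY closes
prices = np.linspace(160.0, 165.0, 200)
X, y = make_sequences(prices)
X_train, X_test, y_train, y_test = chronological_split(X, y)
naive_mse = mse(y_test, X_test[:, -1])   # "last value" baseline
```

In a real experiment, X_train and y_train would be fed into the RNN, LSTM, and GRU models trained with Adam; the "last value" baseline above is only a sanity check for the pipeline.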
Purpose The quantitative study explores IT professionals' perspectives on the factors influencing cloud computing adoption in zero-trust environments in government agencies, and examines the security challenges of cloud computing that hinder organizations' information technology modernization by discouraging adoption of cloud services. Design/methodology/approach The extended TAM-TOE model, integrating the Technology-Organization-Environment (TOE) framework and the Technology Acceptance Model (TAM), was applied to explore the variables influencing cloud adoption. Sample data from 178 IT professionals employed by government agencies with experience in cloud computing technology and zero-trust security were collected for statistical analysis to answer the research questions and test the hypotheses. Three regression models were used to determine how the extended TAM-TOE factors influence cloud adoption in zero-trust environments. Findings The extended TAM-TOE model is appropriate for studying cloud adoption in a zero-trust environment and reveals the factors that can be used to predict cloud computing adoption. Complexity, top management support, and training and education significantly predicted perceived ease of use; compatibility and perceived ease of use significantly predicted perceived usefulness; trading partner support, perceived ease of use, and perceived usefulness significantly predicted cloud adoption intention in a zero-trust environment. Practical implications Future researchers could build on the study's findings to advance design studies on cloud computing adoption in zero-trust environments. Zero trust can be studied as an independent variable for understanding the incentives or barriers impacting cloud adoption intention. Originality/value The research addresses a gap in the literature on factors impacting cloud computing adoption in zero-trust environments. It presents the significant factors influencing cloud adoption, providing a roadmap to secure cloud services that meet regulatory requirements.
"Factors Influencing Cloud Computing Adoption in a Zero-Trust Environment." Advances in Machine Learning & Artificial Intelligence, 2024-02-05. DOI: 10.33140/amlai.05.01.03.
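The Findings rest on three regression models relating TAM-TOE predictors to perceived ease of use, perceived usefulness, and adoption intention. A minimal sketch of one such regression on synthetic data; the predictors and coefficients here are illustrative, not the study's estimates:

```python
import numpy as np

def ols_fit(X, y):
    """Ordinary least squares: returns coefficients (intercept first)
    and R^2, the share of variance the predictors explain."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1.0 - resid.var() / y.var()
    return beta, r2

# Synthetic illustration: perceived ease of use driven by three
# TAM-TOE predictors (complexity, top management support, training)
rng = np.random.default_rng(0)
n = 178                       # sample size reported in the study
X = rng.normal(size=(n, 3))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(scale=0.1, size=n)
beta, r2 = ols_fit(X, y)
```

Significance of each coefficient would then be judged from its standard error, which is what "significantly predicted" refers to in the Findings.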
GitHub repositories are often used for collaborative development, allowing multiple developers to work on the same codebase and contribute their changes. Each repository is typically associated with a specific project and can contain everything from code files to documentation, bug reports, and feature requests. Depending on the context, it can contain files, directories, and other resources related to a project, and it is often associated with a particular programming language. By default, GitHub automatically detects the primary programming language used in a repository based on the file extensions and content within the repository. However, this detection is not always accurate, and there are some potential issues to consider. One of these problems is that the detected language may not reflect the programming languages actually used in the project, especially if the project uses multiple languages or has undergone language migrations. In this study, we apply an alternative technique to resolve problems with classifying the programming language of a GitHub repository by analysing file extensions and identifying all programming languages used in the project. We also determine the appropriate primary programming language for the repository. This paper investigates how this technique can address the issues surrounding GitHub’s automatic detection of a repository’s primary programming language and how it can provide information on all the programming languages used in a project.
"Exploring the Integration of Machine Learning Models in Programming Languages on GitHub: Impact on Compatibility to Address Them." Advances in Machine Learning & Artificial Intelligence, 2023-11-15. DOI: 10.33140/amlai.04.02.06.
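A sketch of the extension-based approach described above: count files per language and take the most common as the primary language. The extension-to-language table is an illustrative stub; a production tool would need a far larger mapping (GitHub's own Linguist maintains one).

```python
from collections import Counter
from pathlib import PurePosixPath

# Illustrative, deliberately small extension-to-language map
EXT_TO_LANG = {
    ".py": "Python", ".js": "JavaScript", ".ts": "TypeScript",
    ".java": "Java", ".c": "C", ".cpp": "C++", ".go": "Go", ".rs": "Rust",
}

def detect_languages(file_paths):
    """Count files per language by extension; return every language
    used plus the primary one (most files), mirroring the idea of
    correcting GitHub's single-language auto-detection."""
    counts = Counter()
    for p in file_paths:
        lang = EXT_TO_LANG.get(PurePosixPath(p).suffix.lower())
        if lang:
            counts[lang] += 1
    primary = counts.most_common(1)[0][0] if counts else None
    return dict(counts), primary

langs, primary = detect_languages(
    ["src/app.py", "src/util.py", "web/index.js", "README.md"]
)
```

Unlike a single detected label, this returns the full language breakdown, which is the information the study argues a repository should expose.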
The harmful impacts of excessive stress on people’s health have been widely acknowledged, necessitating effective methods for its identification. Recognizing the importance of early stress detection and intervention, this research aims to contribute to the field of healthcare. To achieve this objective, the study classifies electrocardiogram (ECG) signals by assessing physio-psychological states, specifically stress, and examines the role of music therapy in alleviating stress. ECG signals were recorded both before and after a music therapy session. Using signal processing techniques, essential features were extracted from these ECG signals, enabling more accurate identification of stress. Through experimentation and model evaluation, k-Nearest Neighbors (KNN) and Classification and Regression Trees (CART) were determined to be the most effective models for this classification, with both consistently yielding 90% accuracy. The extracted features and models identified here are vital to effectively recognizing stress in ECG signals, offering valuable insights for future studies and clinical applications. This research contributes not only to the development of tools for stress detection but also to the understanding of the therapeutic impact of music.
"Stress-based Classification of Electrocardiogram Signals Before and After Music Therapy using Heart Rate Variability and Machine Learning." Advances in Machine Learning & Artificial Intelligence, 2023-10-26. DOI: 10.33140/amlai.04.02.05.
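As an illustration of the pipeline described above (features from ECG-derived RR intervals, then a KNN classifier), here is a minimal sketch with two standard heart-rate-variability features and a hand-rolled k-NN. The features and the synthetic recordings are generic, not the study's exact choices:

```python
import numpy as np

def hrv_features(rr_ms):
    """Two standard HRV features from RR intervals (ms): SDNN
    (overall variability) and RMSSD (beat-to-beat variability),
    both of which tend to drop under stress."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std()
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    return np.array([sdnn, rmssd])

def knn_predict(X_train, y_train, x, k=3):
    """Minimal k-nearest-neighbours classifier (Euclidean distance,
    majority vote) standing in for a library implementation."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    votes = [y_train[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Synthetic recordings: stressed = faster, less variable heartbeats
rng = np.random.default_rng(1)
stressed = [hrv_features(700 + rng.normal(0, 10, 120)) for _ in range(5)]
relaxed = [hrv_features(900 + rng.normal(0, 60, 120)) for _ in range(5)]
X_train = np.array(stressed + relaxed)
y_train = ["stress"] * 5 + ["calm"] * 5
label = knn_predict(X_train, y_train, hrv_features(710 + rng.normal(0, 12, 120)))
```

The CART model reported in the study would be trained on the same feature vectors; only the classifier changes.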
Muhammad Usman Tariq, Arslan Akram, Sobia Yaqoob, Mehwish Rasheed, Muhammad Salman Ali, Scholar
Unconstrained real-world facial photographs are classified into predefined age and gender groups using automatic face-based age and gender estimation. This research area has recently seen substantial improvements owing to its value in real-world applications. However, conventional approaches evaluated on unfiltered benchmarks have shown an inability to handle the high degree of variation in such unconstrained photographs. Approaches based on Convolutional Neural Networks (CNNs) have recently been widely used for classification tasks because of their superior performance in facial analysis. The two-level CNN framework comprises both feature extraction and classification: the feature-extraction stage extracts characteristics such as age and gender, while the classification stage assigns the face images to the appropriate age and gender groups. In this work we propose an end-to-end CNN approach to achieve better, more robust age-group and gender classification of unfiltered real-world faces. To handle the significant variation in those faces, we apply a robust image-preprocessing step that prepares and processes the unfiltered images before they are fed into the CNN model. Experimental results on the OIUAudience benchmark show that our approach achieves state-of-the-art performance in both age-group and gender classification. Our network is pretrained on the IMDb-WIKI dataset with noisy labels, then fine-tuned on MORPH-II, and finally on the training set of the OIUAudience benchmark. Compared with the best reported results, age-group classification is improved in both exact and validation accuracy, while gender classification is improved in exact accuracy and reaches 93.42 percent validation accuracy.
"Real-time Age and Gender Classification using VGG19." Advances in Machine Learning & Artificial Intelligence, 2023-10-20. DOI: 10.33140/amlai.04.02.04.
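The abstract mentions a preprocessing step applied to unfiltered faces before the CNN, but the exact procedure is not specified, so the following is only a generic sketch: centre-crop to a square, nearest-neighbour resize to the 224x224 input that a VGG19-style network expects, and scale pixel values to [0, 1].

```python
import numpy as np

def preprocess_face(img, out_size=224):
    """Centre-crop to a square, nearest-neighbour resize to
    out_size x out_size, and scale pixel values to [0, 1]."""
    h, w = img.shape[:2]
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    crop = img[top:top + s, left:left + s]
    idx = np.arange(out_size) * s // out_size   # source row/col indices
    resized = crop[idx][:, idx]
    return resized.astype(np.float32) / 255.0

# A random image standing in for a detected face region
face = np.random.default_rng(2).integers(0, 256, (300, 400, 3)).astype(np.uint8)
x = preprocess_face(face)
```

In a full pipeline, a batch of such arrays would be fed to the pretrained VGG19 backbone for the staged fine-tuning the abstract describes (IMDb-WIKI, then MORPH-II, then the target training set).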
There has been an observable trend indicating a decline in students' attendance at lectures. Several reasons for this have been proposed, and various mitigating measures have been suggested in the past. We implemented a novel instructional strategy to address this: real-life clinical problems relevant to the topic were integrated into the lectures and deliberately excluded from the pre-lecture handouts. During the lectures, students were encouraged to post questions and actively engage in peer-peer and peer-tutor discussions. To evaluate the impact of this intervention, average student attendance before and after was monitored and statistically analyzed. The results revealed a significant increase in average attendance, demonstrating a statistically meaningful difference (p<0.001). Commencing classes with pertinent patient problems or real case scenarios and stimulating student participation through open-ended discussions and interactions significantly enhanced the appeal of the lectures. This intervention is particularly significant in light of the students' forthcoming clerkship training in the undergraduate program, since it prepares them for direct encounters with patients and real clinical problems. Upon analyzing average class attendance pre- and post-implementation of the intervention, a substantial improvement in overall attendance was observed.
Debadatta Panigrahi, Yehia S. Mohamed, Erum Khan. "Enhancing Lecture Attendance: A Novel Approach Utilizing Clinical Case-Based Learning." Advances in Machine Learning & Artificial Intelligence, 2023-10-17. DOI: 10.33140/amlai.04.02.02.
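The before/after comparison above can be sketched as a two-sample test on per-lecture attendance. The attendance figures below are hypothetical; only the method (Welch's t statistic, which does not assume equal variances) is illustrated:

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic for comparing mean attendance
    before vs. after the intervention (unequal variances allowed)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (mb - ma) / math.sqrt(va / na + vb / nb)

# Hypothetical per-lecture attendance percentages
before = [52, 48, 55, 50, 47, 53]
after = [78, 82, 75, 80, 77, 79]
t = welch_t(before, after)
```

A t statistic this large would correspond to a very small p-value, consistent with the p<0.001 the study reports.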
Mayinuer Zipaer, Minoru Yoshida, Kazuyuki Matsumoto, K. Kita
It is difficult to obtain the necessary information accurately from Social Networking Services (SNS) while raising children, so there is likely demand for a system that presents appropriate information to users according to the child's developmental stage. There are still few examples of research on knowledge extraction that focuses on childcare. This research aims to develop a system that extracts and presents useful knowledge for people who are actually raising children, using texts about childcare posted on Twitter. In many systems, numbers in text data are treated as mere strings, normalized to zero, or simply ignored. In this paper, we created a set of tweet texts and a set of profiles according to the developmental stages of infants from "0-year-old child" to "6-year-old child". For each set, we used machine learning algorithms such as NB (Naive Bayes), LR (Logistic Regression), ANN (Approximate Nearest Neighbor search), XGBoost, RF (Random Forest), decision trees, and SVM (Support Vector Machine), and compared them with BERT (Bidirectional Encoder Representations from Transformers), a neural language model, to construct a classification model that predicts the numbers "0" through "6" from sentences. The accuracy of the BERT classifier was slightly higher than that of the NB, LR, ANN, XGBoost, RF, decision tree, and SVM classifiers, indicating that the BERT-based method was better.
{"title":"Using Machine Learning to Classify Information Related to Child Rearing of Infants from Twitter","authors":"Mayinuer Zipaer, Minoru Yoshida, Kazuyuki Matsumoto, K. Kita","doi":"10.33140/amlai.04.02.01","DOIUrl":"https://doi.org/10.33140/amlai.04.02.01","url":null,"abstract":"It is difficult to obtain the necessary information accurately from Social Networking Services (SNS) while raising children, and there is thought to be a certain demand for a system that presents appropriate information to users according to the child's developmental stage. There are still few examples of research on knowledge extraction that focuses on childcare. This research aims to develop a system that extracts and presents useful knowledge for people who are actually raising children, using texts about childcare posted on Twitter. In many systems, numbers in text data are treated as mere strings, like words, and are normalized to zero or simply ignored. In this paper, we created a set of tweet texts and a set of profiles organized according to the developmental stages of infants, from \"0-year-old child\" to \"6-year-old child\". For each set, we used ML algorithms, namely NB (Naive Bayes), LR (Logistic Regression), ANN (Approximate Nearest Neighbor search), XGBoost, RF (Random Forest), decision trees, and SVM (Support Vector Machine), and compared them with BERT (Bidirectional Encoder Representations from Transformers), a neural language model, to construct a classification model that predicts numbers from \"0\" to \"6\" from sentences. The accuracy of the BERT classifier was slightly higher than that of the NB, LR, ANN, XGBoost, RF, decision tree, and SVM classifiers, indicating that the BERT-based classification method performed better.","PeriodicalId":186756,"journal":{"name":"Advances in Machine Learning & Artificial Intelligence","volume":"142 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139336478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
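The comparison above pits classical text classifiers against BERT on a 7-class age-prediction task. As a minimal illustrative sketch of the classical baselines only (NB and LR over TF-IDF features), the snippet below trains on a handful of synthetic placeholder texts with labels 0 through 6; the texts, labels, and feature choices are assumptions for illustration, not the paper's tweet corpus or exact pipeline.

```python
# Sketch: predicting a developmental-stage label (0-6) from short childcare
# texts, in the spirit of the paper's NB and LR baselines. All texts and
# labels here are synthetic placeholders, not the study's Twitter data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "my newborn sleeps most of the day",
    "baby started crawling around the house",
    "toddler speaks in two-word sentences now",
    "my three year old loves the playground slide",
    "four year old is practicing writing letters",
    "five year old is getting ready for school",
    "six year old reads picture books alone",
]
labels = [0, 1, 2, 3, 4, 5, 6]  # one class per developmental stage "0".."6"

# TF-IDF features + multinomial Naive Bayes (one compared baseline)
nb_clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
nb_clf.fit(texts, labels)

# Logistic Regression on the same features (another compared baseline)
lr_clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
lr_clf.fit(texts, labels)

pred = int(lr_clf.predict(["baby just started crawling today"])[0])
print(pred)
```

A BERT-based classifier, as in the paper, would replace the TF-IDF features with contextual token embeddings and fine-tune the transformer on the same 0–6 labels.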
The digitalisation of banking services is certainly a positive development that reduces customer fatigue by allowing transactions to be carried out through mobile devices and other electronic instruments over the internet. However, the research literature demonstrates that digital initiatives not only have positive connotations but have also created a digital divide among demographics across communities; studies likewise show evidence of a large gap among age groups, genders, income levels, and socio-cultural groups in the uptake of digital technologies in the financial sector, especially banking [1,2]. The present study focuses on understanding the changing and existing phenomenon of banking with special reference to digitalisation and customers' adoption of these new technologies. The study mainly draws its constructs from Technology Acceptance Models to test whether a digital divide exists among the demographics in the study.
{"title":"Analysing the Digital Divide among the Demographics in the State of Telangana with Reference to the Adoption of Digital Banking Services","authors":"","doi":"10.33140/amlai.04.01.02","DOIUrl":"https://doi.org/10.33140/amlai.04.01.02","url":null,"abstract":"The digitalisation of banking services is certainly a positive development that reduces customer fatigue by allowing transactions to be carried out through mobile devices and other electronic instruments over the internet. However, the research literature demonstrates that digital initiatives not only have positive connotations but have also created a digital divide among demographics across communities; studies likewise show evidence of a large gap among age groups, genders, income levels, and socio-cultural groups in the uptake of digital technologies in the financial sector, especially banking [1,2]. The present study focuses on understanding the changing and existing phenomenon of banking with special reference to digitalisation and customers' adoption of these new technologies. The study mainly draws its constructs from Technology Acceptance Models to test whether a digital divide exists among the demographics in the study.","PeriodicalId":186756,"journal":{"name":"Advances in Machine Learning & Artificial Intelligence","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116090626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background: Epileptic seizure type diagnosis is made by a clinician based on the symptoms during the episode and an Electroencephalograph (EEG) recording taken during the inter-ictal period. The main challenge is that, in the absence of an attendee, patients are often unable to describe their symptoms, and it is not always possible to find a signature in the inter-ictal EEG signal. Aims: This paper aims to analyse epileptic seizure EEG signals to diagnose seizures in the pre-ictal, ictal, and inter-ictal stages and to classify them into seven different classes. Methods: The licensed Temple University Hospital dataset is used for the study. From the seizure corpus, seven seizure types are pre-processed and segregated into pre-ictal, ictal, and inter-ictal stages. Multiclass classification is performed using different machine and deep learning techniques, such as K-Nearest Neighbor (KNN) and Random Forest. Results: Multiclass classification of seven epileptic seizure types with 20 channels and an 80-20 train-test ratio achieved 94.7%, 94.7%, and 69.0% training accuracy and 94.46%, 94.46%, and 71.11% test accuracy with weighted KNN for the pre-ictal, ictal, and inter-ictal stages respectively. Conclusion: Classification of seven epileptic seizure types using machine learning techniques was carried out with MATLAB software, and weighted KNN showed comparatively better accuracy.
{"title":"Seven Epileptic Seizure Type Classification in Pre-Ictal, Ictal and Inter-Ictal Stages using Machine Learning Techniques","authors":"","doi":"10.33140/amlai.04.01.01","DOIUrl":"https://doi.org/10.33140/amlai.04.01.01","url":null,"abstract":"Background: Epileptic seizure type diagnosis is made by a clinician based on the symptoms during the episode and an Electroencephalograph (EEG) recording taken during the inter-ictal period. The main challenge is that, in the absence of an attendee, patients are often unable to describe their symptoms, and it is not always possible to find a signature in the inter-ictal EEG signal. Aims: This paper aims to analyse epileptic seizure EEG signals to diagnose seizures in the pre-ictal, ictal, and inter-ictal stages and to classify them into seven different classes. Methods: The licensed Temple University Hospital dataset is used for the study. From the seizure corpus, seven seizure types are pre-processed and segregated into pre-ictal, ictal, and inter-ictal stages. Multiclass classification is performed using different machine and deep learning techniques, such as K-Nearest Neighbor (KNN) and Random Forest. Results: Multiclass classification of seven epileptic seizure types with 20 channels and an 80-20 train-test ratio achieved 94.7%, 94.7%, and 69.0% training accuracy and 94.46%, 94.46%, and 71.11% test accuracy with weighted KNN for the pre-ictal, ictal, and inter-ictal stages respectively. Conclusion: Classification of seven epileptic seizure types using machine learning techniques was carried out with MATLAB software, and weighted KNN showed comparatively better accuracy.","PeriodicalId":186756,"journal":{"name":"Advances in Machine Learning & Artificial Intelligence","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130682951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
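The seizure study's best classifier is distance-weighted KNN with an 80-20 train-test split. As a minimal sketch of that technique under stated assumptions (random 20-dimensional feature vectors stand in for the 20-channel EEG features; the study itself used MATLAB and the Temple University Hospital corpus), the snippet below implements inverse-squared-distance weighted voting over the k nearest neighbours for seven classes.

```python
import numpy as np

# Sketch of distance-weighted KNN, the classifier the study found most
# accurate. The features below are synthetic stand-ins (one Gaussian
# cluster per seizure type), not real EEG data.
rng = np.random.default_rng(0)

def weighted_knn_predict(X_train, y_train, x, k=3):
    """Predict a class by letting the k nearest neighbours vote with 1/d^2 weights."""
    d2 = np.sum((X_train - x) ** 2, axis=1)   # squared distances to all training points
    nearest = np.argsort(d2)[:k]              # indices of the k closest neighbours
    votes = {}
    for i in nearest:
        w = 1.0 / (d2[i] + 1e-12)             # closer neighbours get heavier votes
        votes[y_train[i]] = votes.get(y_train[i], 0.0) + w
    return max(votes, key=votes.get)

# Synthetic 7-class data: 20 samples per class, 20 "channel" features each
X = np.vstack([rng.normal(c, 0.3, size=(20, 20)) for c in range(7)])
y = np.repeat(np.arange(7), 20)

# 80-20 train-test split, mirroring the paper's ratio
perm = rng.permutation(len(y))
split = int(0.8 * len(y))
train, test = perm[:split], perm[split:]

preds = [weighted_knn_predict(X[train], y[train], x) for x in X[test]]
acc = float(np.mean(np.array(preds) == y[test]))
print(round(acc, 2))
```

On well-separated synthetic clusters like these the weighted vote recovers nearly all test labels; the study's reported 69–94% accuracies reflect the much harder overlap between real seizure-stage EEG features.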