Integrating TOGAF and Big Data for Digital Transformation: Case Study on the Lending Industry
Pub Date: 2024-05-07 | DOI: 10.33395/sinkron.v8i2.13648
Andreas Yudhistira, A. Fajar
In today’s digital era, the strategic integration of enterprise architecture frameworks with Big Data technologies is crucial in driving digital transformation, especially within the lending industry. This research aims to identify and analyze how The Open Group Architecture Framework (TOGAF) can be integrated with Big Data to enhance innovation, operational efficiency, and decision-making in the lending sector. This study examines Indonesian financial institutions using qualitative case studies, exploring the intricate practices, challenges, and benefits of the combination of TOGAF and Big Data. The qualitative methodology focuses on in-depth interviews and document analysis to gather contextual insights into the implementation dynamics and impacts of these technologies. Findings indicate that integrating TOGAF and Big Data not only streamlines workflows but also significantly enhances data security and risk management—critical elements in the lending industry. A vital outcome of this study is the development of a robust integration model that serves as a blueprint for companies in similar sectors to navigate their digital transformation journeys. Additionally, this research provides strategic recommendations to overcome integration and implementation challenges. These guidelines facilitate the transition to a more cohesive and strengthened digital architecture, equipping financial institutions to manage the complexities of modern digital economies effectively. Ultimately, this study delivers a comprehensive framework that enriches theoretical understanding and offers practical insights for effective technology integration in financial services.
{"title":"Integrating TOGAF and Big Data for Digital Transformation: Case Study on the Lending Industry","authors":"Andreas Yudhistira, A. Fajar","doi":"10.33395/sinkron.v8i2.13648","DOIUrl":"https://doi.org/10.33395/sinkron.v8i2.13648","url":null,"abstract":"In today’s digital era, the strategic integration of enterprise architecture frameworks with Big Data technologies is crucial in driving digital transformation, especially within the lending industry. This research aims to identify and analyze how The Open Group Architecture Framework (TOGAF) can be integrated with Big Data to enhance innovation, operational efficiency, and decision-making in the lending sector. This study examines Indonesian financial institutions using qualitative case studies, exploring the intricate practices, challenges, and benefits of the combination of TOGAF and Big Data. The qualitative methodology focuses on in-depth interviews and document analysis to gather contextual insights into the implementation dynamics and impacts of these technologies. Findings indicate that integrating TOGAF and Big Data not only streamlines workflows but also significantly enhances data security and risk management—critical elements in the lending industry. A vital outcome of this study is the development of a robust integration model that serves as a blueprint for companies in similar sectors to navigate their digital transformation journeys. Additionally, this research provides strategic recommendations to overcome integration and implementation challenges. These guidelines facilitate the transition to a more cohesive and strengthened digital architecture, equipping financial institutions to manage the complexities of modern digital economies effectively. Ultimately, this study delivers a comprehensive framework that enriches theoretical understanding and offers practical insights for effective technology integration in financial services.","PeriodicalId":34046,"journal":{"name":"Sinkron","volume":"19 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141004179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prediction of Stunting in Toddlers Using Bagging and Random Forest Algorithms
Pub Date: 2024-03-31 | DOI: 10.33395/sinkron.v8i2.13448
Juwariyem Juwariyem, S. Sriyanto, Sri Lestari, Chairani Chairani
Stunting is a condition of failure to thrive in toddlers, caused by prolonged malnutrition, repeated infections, and lack of stimulation. This malnutrition is influenced by the mother's health during pregnancy, the health status of adolescents, and by economic, cultural, and environmental factors such as sanitation and access to health services. Stunting is currently predicted with a conventional approach, secondary data analysis, in which surveys and studies collect data on risk factors such as maternal nutritional status, child nutritional intake, access to health services, sanitation, and other socioeconomic factors. Such analysis can describe the prevalence of stunting and its contributing factors, but a more capable solution is needed. Data mining techniques are one option: they can analyze existing records, produce predictions for the future, and provide useful information for business or health needs. On that basis, this research uses the Bagging method with the Random Forest algorithm to measure the accuracy of stunting predictions in toddlers. Bagging (bootstrap aggregation) is an ensemble method that improves classification by combining classifiers trained on random bootstrap samples of the training dataset, which reduces variance and helps avoid overfitting. Random Forest is a powerful machine learning algorithm that combines the decisions of many independent decision trees to improve prediction performance and model stability. Combining the Bagging method with the Random Forest algorithm is therefore expected to yield better stunting predictions in toddlers. This research uses a dataset of 10,001 records with 7 predictor attributes and 1 class attribute. Testing the Bagging method with the Random Forest algorithm gave a precision of 91.72% and recall of 98.84% for the "yes" class, a precision of 93.55% and recall of 65.28% for the "no" class, and an overall accuracy of 91.98%.
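A minimal sketch of the bagging-over-random-forest setup the abstract describes, assuming a tabular dataset with 7 predictor attributes and a binary "stunting" label; the file name and column names are illustrative, not taken from the paper's dataset.

```python
# Sketch: bagging an ensemble of random forests for stunting prediction.
# File path and column names are illustrative placeholders.
import pandas as pd
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, accuracy_score

df = pd.read_csv("stunting.csv")            # e.g. 10,001 records, 7 attributes + 1 class
X = df.drop(columns=["stunting"])           # predictor attributes
y = df["stunting"]                          # "yes" / "no" class label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Bagging (bootstrap aggregation) over random-forest base learners.
# Note: scikit-learn < 1.2 uses the keyword base_estimator instead of estimator.
model = BaggingClassifier(
    estimator=RandomForestClassifier(n_estimators=100, random_state=42),
    n_estimators=10,        # number of bootstrap samples / base models
    bootstrap=True,
    random_state=42)
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
print(classification_report(y_test, y_pred))        # per-class precision and recall
print("accuracy:", accuracy_score(y_test, y_pred))
```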
{"title":"Prediction of Stunting in Toddlers Using Bagging and Random Forest Algorithms","authors":"Juwariyem Juwariyem, S. Sriyanto, Sri Lestari, Chairani Chairani","doi":"10.33395/sinkron.v8i2.13448","DOIUrl":"https://doi.org/10.33395/sinkron.v8i2.13448","url":null,"abstract":"Stunting is a condition of failure to thrive in toddlers. This is caused by lack of nutrition over a long period of time, exposure to repeated infections, and lack of stimulation. This malnutrition condition is influenced by the mother's health during pregnancy, the health status of adolescents, as well as the economy and culture and the environment, such as sanitation and access to health services. To find out predictions of stunting, currently we still use a common method, namely Secondary Data Analysis, namely by conducting surveys and research to collect data regarding stunting. This data includes risk factors related to stunting, such as maternal nutritional status, child nutritional intake, access to health services, sanitation, and other socioeconomic factors. This secondary data analysis can provide an overview of the prevalence of stunting and the contributing factors. To overcome this, the right solution is needed, one solution that can be used is data mining techniques, where data mining can be used to carry out analysis and predictions for the future, and provide useful information for business or health needs. Based on this analysis, this research will use the Bagging method and Random Forest Algorithm to obtain the accuracy level of stunting predictions in toddlers. Bagging or Bootstrap Aggregation is an ensemble method that can improve classification by randomly combining classifications on the training dataset which can reduce variation and avoid overfitting. Random Forest is a powerful algorithm in machine learning that combines decisions from many independent decision trees to improve prediction performance and model stability. By combining the Bagging method and the Random Forest algorithm, it is hoped that it will be able to provide better stunting prediction results in toddlers. This research uses a dataset with a total of 10,001 data records, 7 attributes and 1 attribute class. Based on the test results using the Bagging method and the Random Forest algorithm in this research, the results obtained were class precision yes 91.72%, class recall yes 98.84%, class precision no 93.55%, class recall no 65.28%, and accuracy of 91.98%.","PeriodicalId":34046,"journal":{"name":"Sinkron","volume":"21 18","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140358701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Extraction of Shape and Texture Features of Dermoscopy Image for Skin Cancer Identification
Pub Date: 2024-03-31 | DOI: 10.33395/sinkron.v8i2.13557
Febri Aldi, S. Sumijan
Skin diseases are increasing and becoming a very serious problem. Skin cancer falls broadly into two groups, melanoma and non-melanoma, and non-melanoma cases are encountered most often. A critical factor in treating skin cancer is early diagnosis. Doctors usually use a biopsy to detect skin cancer, while computer-based technology offers a more convenient, cheaper, and faster diagnosis of skin cancer symptoms. This study aims to identify the type of skin cancer. The data used were 60 dermoscopy images obtained from the Kaggle site, covering 6 lesion types: Basal Cell Carcinoma, Dermatofibroma, Melanoma, Nevus, Pigmented Benign Keratosis, and Vascular Lesion. Dermoscopy image processing begins with pre-processing, which converts RGB images to the LAB color space. Segmentation is then carried out to separate the lesion from the background. Shape and texture feature extraction is used to obtain the characteristics of the dermoscopy images: two shape features, eccentricity and metric, and four texture features, contrast, correlation, energy, and homogeneity. The result of this study is that the type of skin cancer can be identified from the extracted image features using a program built in the MATLAB application. The shape and texture feature extraction technique is shown to work well in identifying the type of skin cancer. Future work is expected to use more data and to add color features when identifying dermoscopy images.
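The paper implements the pipeline in MATLAB; the sketch below reproduces the same six features (eccentricity, metric, contrast, correlation, energy, homogeneity) in Python with scikit-image, assuming a grayscale dermoscopy image and a rough binary segmentation. The file name and the thresholding step are illustrative placeholders, not the paper's actual segmentation method.

```python
# Sketch of the shape + texture features described above (the paper uses MATLAB).
# "lesion.jpg" and the threshold-based segmentation are illustrative placeholders.
import numpy as np
from skimage import io, color, measure
from skimage.feature import graycomatrix, graycoprops  # spelled greycomatrix in older skimage

image = io.imread("lesion.jpg")
gray = (color.rgb2gray(image) * 255).astype(np.uint8)
mask = gray < gray.mean()                      # crude placeholder segmentation

# Shape features on the largest segmented region: eccentricity and
# "metric" (circularity = 4*pi*area / perimeter^2).
labels = measure.label(mask)
region = max(measure.regionprops(labels), key=lambda r: r.area)
eccentricity = region.eccentricity
metric = 4 * np.pi * region.area / (region.perimeter ** 2)

# Texture features from a gray-level co-occurrence matrix (GLCM).
glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
contrast    = graycoprops(glcm, "contrast")[0, 0]
correlation = graycoprops(glcm, "correlation")[0, 0]
energy      = graycoprops(glcm, "energy")[0, 0]
homogeneity = graycoprops(glcm, "homogeneity")[0, 0]

print(eccentricity, metric, contrast, correlation, energy, homogeneity)
```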
{"title":"Extraction of Shape and Texture Features of Dermoscopy Image for Skin Cancer Identification","authors":"Febri Aldi, S. Sumijan","doi":"10.33395/sinkron.v8i2.13557","DOIUrl":"https://doi.org/10.33395/sinkron.v8i2.13557","url":null,"abstract":"Skin diseases are increasing and becoming a very serious problem. Skin cancer in general there are 2, namely melanoma and non-melanoma. Cases that are often encountered are in non-melanoma types. A critical factor in the treatment of skin cancer is early diagnosis. Doctors usually use the biopsy method to detect skin cancer. Computer-based technology provides convenient, cheaper, and faster diagnosis of skin cancer symptoms. This study aims to identify the type of skin cancer. The data used in the study were 6 types of skin cancer, namely Basal Cell Carcinoma, Dermatofibroma, Melanoma, Nevus image, Pigmented Benign Keratosis image, or Vascular Lesion, with a total of 60 dermoscopy images obtained from the Kaggle site. Dermoscopy image processing begins with a pre-processing process, which converts RGB images to LAB. After that, segmentation is carried out to separate objects from the background. The method of extracting shape and texture features is used to obtain the characteristics of dermoscopy images. As many as 2 types of shape features, namely eccentricity and metric, and 4 types of texture features, namely contrast, correlation, energy, and homogeneity. The result of this study is that it can identify the type of skin cancer based on image features that have been extracted using a program from the Matlab application. The technique of extracting shape and texture features is proven to work well in identifying the type of skin cancer. In the future it is expected to use more data, and add color features in identifying dermoscopy images.","PeriodicalId":34046,"journal":{"name":"Sinkron","volume":"32 38","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140358141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Implementation of App Engine and Cloud Storage as REST API on Smart Farm Application
Pub Date: 2024-03-31 | DOI: 10.33395/sinkron.v8i2.13386
K. Azkiya, Muhamad Irsan, Muhammad Faris Fathoni
Smart Farm is an agricultural application that uses machine learning and cloud computing technology to improve efficiency in the farming process. Technological advancement and sustainable agriculture are two essential aspects of supporting global food security. This research investigates the implementation of App Engine and Cloud Storage in developing a REST API for the Smart Farm application. By utilizing cloud computing technology such as App Engine and cloud storage such as Cloud Storage, we can create efficient solutions to monitor and manage agriculture better. This research implements App Engine and Cloud Storage to develop a REST API that allows Smart Farm users to access data and control farming devices efficiently. The authors designed, developed, and tested this system to ensure optimal performance and reliability in agricultural data collection and distribution. This approach has several significant advantages. First, App Engine allows for easy scalability, ensuring the system can handle increased data demand without disruption. Second, Cloud Storage provides secure and scalable storage for agricultural data that can be accessed from anywhere, giving farmers easy and quick access to critical data. Moreover, the use of cloud technology also reduces infrastructure and maintenance costs. The developed system integrates App Engine and Cloud Storage with the Smart Farm application: App Engine acts as the processing engine that receives user requests via the REST API, processes the required data, and provides appropriate responses, while farm data, such as image data, is stored and managed in Cloud Storage. Users can access this data through the Smart Farm app or other devices, enabling better farming monitoring and decision-making.
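A minimal sketch of the kind of REST endpoint the abstract describes: a Flask handler of the sort App Engine's standard Python runtime serves, storing an uploaded farm image in Cloud Storage. The bucket name and route are assumptions for illustration, not details taken from the Smart Farm system.

```python
# Sketch: REST endpoint that stores uploaded farm image data in Cloud Storage.
# Bucket name and route are hypothetical; on App Engine this app would be
# declared in app.yaml and served by the standard Python runtime.
from flask import Flask, request, jsonify
from google.cloud import storage

app = Flask(__name__)
BUCKET_NAME = "smart-farm-data"            # hypothetical bucket name

@app.route("/api/images", methods=["POST"])
def upload_image():
    file = request.files["image"]                      # multipart form upload
    client = storage.Client()
    bucket = client.bucket(BUCKET_NAME)
    blob = bucket.blob(f"images/{file.filename}")
    blob.upload_from_file(file, content_type=file.content_type)
    return jsonify({"stored": blob.name}), 201

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=8080)               # local development server
```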
{"title":"Implementation of App Engine and Cloud Storage as REST API on Smart Farm Application","authors":"K. Azkiya, Muhamad Irsan, Muhammad Faris Fathoni","doi":"10.33395/sinkron.v8i2.13386","DOIUrl":"https://doi.org/10.33395/sinkron.v8i2.13386","url":null,"abstract":"Smart Farm is an agricultural application that uses machine learning and cloud computing technology to improve efficiency in the farming process. Technological advancement and sustainable agriculture are two essential aspects of supporting global food security. This research investigates the implementation of App Engine and Cloud Storage in developing REST API in Smart Farm applications. By utilizing cloud computing technology, such as App Engine, and cloud storage, such as Cloud Storage, we can create efficient solutions to monitor and manage agriculture better. This research implements an App Engine and Cloud Storage to develop a REST API that allows Smart Farm application users to access data and control farming devices efficiently. The authors designed, developed, and tested this system to ensure optimal performance and reliability in agricultural data collection and distribution. This method has several significant advantages. First, App Engine allows for easy scalability, ensuring the system can handle increased data demand without disruption. Secondly, Cloud Storage provides secure and scalable storage for agricultural data, which can be accessed from anywhere. This provides easy and quick access to critical data for farmers. Moreover, the use of cloud technology also reduces infrastructure and maintenance costs. The developed system integrates the App Engine and Cloud Storage with the Smart Farm application. The App Engine is a processing engine that receives user requests via the REST API, processes the required data, and provides appropriate responses. Like image data, farm data is stored and managed on Cloud Storage. Users can access this data through the Smart Farm app or other devices, enabling better farming monitoring and decision-making.","PeriodicalId":34046,"journal":{"name":"Sinkron","volume":"89 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140360350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparative Study: Preemptive Shortest Job First and Round Robin Algorithms
Pub Date: 2024-03-31 | DOI: 10.33395/sinkron.v8i2.12525
Rakhmat Purnomo, Tri Dharma Putra
Abstract: An operating system is software that acts as an interface between computer hardware and the user. It is known as a resource manager: its main responsibility is to handle the resources of the computer system. Scheduling is a key concept in multitasking and multiprocessing operating system design, switching the CPU among processes. Shortest job first (SJF) and round robin are two well-known CPU scheduling algorithms. Shortest job first can be preemptive: when a new process arrives, the running process can be interrupted. With the round robin algorithm, the CPU is shared in time slices, also called quanta, with context switching between processes. This article presents a comparative study of the preemptive shortest job first and round robin algorithms. Three comparative studies are discussed to understand the two algorithms more deeply. In all of them, the average waiting time and average turnaround time are higher for the round robin algorithm. In the first comparative study, the average waiting time is 52% higher and the average turnaround time 30% higher. In the second, the average waiting time is 52% higher and the average turnaround time 35% higher. In the third, the average waiting time is 50% higher and the average turnaround time 28% higher. It is therefore concluded that, for this kind of data, preemptive shortest job first is more efficient than the round robin algorithm.
Keywords: comparative study, preemptive shortest job first algorithm, round robin algorithm, turnaround time, average waiting time, time slice
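A compact simulation of the two schedulers being compared, reproducing the kind of average waiting time / turnaround time comparison the abstract reports. The arrival and burst times and the quantum below are illustrative, not the paper's test cases.

```python
# Toy comparison of preemptive SJF (shortest remaining time first) vs round robin.
# Arrival/burst times and the quantum are illustrative, not the paper's data.
from collections import deque

procs = [(0, 8), (1, 4), (2, 9), (3, 5)]   # (arrival time, burst time)

def preemptive_sjf(procs):
    n, t, done = len(procs), 0, 0
    remaining = [b for _, b in procs]
    finish = [0] * n
    while done < n:
        ready = [i for i in range(n) if procs[i][0] <= t and remaining[i] > 0]
        if not ready:
            t += 1                                  # CPU idle
            continue
        i = min(ready, key=lambda j: remaining[j])  # shortest remaining time
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            finish[i] = t
            done += 1
    return finish

def round_robin(procs, quantum=2):
    n, t = len(procs), 0
    remaining = [b for _, b in procs]
    finish, arrived, queue = [0] * n, [False] * n, deque()
    def admit():
        for i in range(n):
            if not arrived[i] and procs[i][0] <= t:
                arrived[i] = True
                queue.append(i)
    admit()
    while any(r > 0 for r in remaining):
        while not queue:
            t += 1                                  # CPU idle until next arrival
            admit()
        i = queue.popleft()
        run = min(quantum, remaining[i])
        remaining[i] -= run
        t += run
        admit()                                     # admit arrivals during this slice
        if remaining[i] > 0:
            queue.append(i)
        else:
            finish[i] = t
    return finish

def report(name, finish):
    tat = [finish[i] - procs[i][0] for i in range(len(procs))]
    wait = [tat[i] - procs[i][1] for i in range(len(procs))]
    print(f"{name}: avg waiting = {sum(wait)/len(wait):.2f}, "
          f"avg turnaround = {sum(tat)/len(tat):.2f}")

report("Preemptive SJF", preemptive_sjf(procs))
report("Round Robin   ", round_robin(procs))
```

With this illustrative workload the round robin averages also come out higher than the preemptive SJF averages, which is the direction of the result the abstract reports.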
{"title":"Comparative Study: Preemptive Shortest Job First and Round Robin Algorithms","authors":"Rakhmat Purnomo, Tri Dharma Putra","doi":"10.33395/sinkron.v8i2.12525","DOIUrl":"https://doi.org/10.33395/sinkron.v8i2.12525","url":null,"abstract":"Abstract: Operating system is a software acting as an interface between computer hardware and user. Operating system is known as a resource manager. The main responsibility of operating system is to handle resources of computer system. Scheduling is a key concept in computer multitasking and multiprocessing operating system design by switching the CPU among process. Shortest job first (SJF) and round robin are two wellknown algorithms in CPU processing. For shortest job first, this algorithm can be preemptived. In preemptive shortest job first, when a new process coming in, the process can be interupted. Where with round robin algorithm there will be time slices, context switching, or also called quantum, between process. In this journal we wil discuss comparative study between preemptive shortest job first and round robin algorithms. Three comparative studies will be discussed to understand these two algorithms more deeply. For all comparative study, the average waiting time and average turnaround time is more for round robin algorithm. In the first comparative study, we get average waiting time 52% more. For average turnaround time, 30% more. In second comparative analysis, we get 52 % average waiting time more and we get 35 % average turnaround time more. For third comparative analysis, average waiting time we get 50% more and for average turnaround time, we get 28% more. Thus it is concluded in our comparative study for these kind of data the preemptive shortest job first is more efficient then the round robin algorithm. \u0000 \u0000Keywords: comparative study, premptive shortest job first algorithm, round robin algorithm, turn around time, average waiting time, time slice","PeriodicalId":34046,"journal":{"name":"Sinkron","volume":"54 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140360738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimizing Facial Expression Recognition with Image Augmentation Techniques: VGG19 Approach on FERC Dataset
Pub Date: 2024-03-31 | DOI: 10.33395/sinkron.v8i2.13507
Fahma Inti Ilmawati, Kusrini Kusrini, Tonny Hidayat
In the field of facial expression recognition (FER), the availability of balanced and representative datasets is key to success in training accurate models. However, Facial Expression Recognition Challenge (FERC) datasets often face the challenge of class imbalance, where some facial expressions have a much smaller number of samples compared to others. This issue can result in biased and unsatisfactory model performance, especially in recognizing less common facial expressions. Data augmentation techniques are becoming an important strategy as they can expand the dataset by creating new variations of existing samples, thus increasing the variety and diversity of the data. Data augmentation can be used to increase the number of samples for less common facial expression classes, thus improving the model's ability to recognize and understand diverse facial expressions. The augmentation results are then combined with balancing techniques such as SMOTE coupled with undersampling to improve model performance. In this study, VGG19 is used to support better model performance. This will provide valuable guidelines for optimizing more advanced CNN models in the future and may encourage further research in creating more innovative augmentation techniques.
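A hedged sketch of the balancing strategy the abstract names (SMOTE oversampling coupled with undersampling) feeding a VGG19 classifier. The array shapes, class count, and sampling settings are illustrative assumptions; in practice the sampling ratios would be tuned per expression class and real FERC images would replace the random arrays.

```python
# Sketch: SMOTE + undersampling to balance expression classes, then a VGG19 head.
# Shapes, class count, and sampling ratios are illustrative placeholders.
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from tensorflow.keras.applications import VGG19
from tensorflow.keras import layers, models

# Stand-in for flattened 48x48 grayscale FERC images with 7 expression labels
X = np.random.rand(1000, 48 * 48).astype("float32")
y = np.random.randint(0, 7, size=1000)

# Oversample rare classes, then trim the largest ones (ratios tuned in practice)
X_over, y_over = SMOTE(random_state=42).fit_resample(X, y)
X_bal, y_bal = RandomUnderSampler(random_state=42).fit_resample(X_over, y_over)

# VGG19 expects 3-channel inputs; replicate the grayscale channel
X_img = X_bal.reshape(-1, 48, 48, 1).repeat(3, axis=-1)

base = VGG19(weights=None, include_top=False, input_shape=(48, 48, 3))
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(7, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_img, y_bal, epochs=10, batch_size=64)
```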
{"title":"Optimizing Facial Expression Recognition with Image Augmentation Techniques: VGG19 Approach on FERC Dataset","authors":"Fahma Inti Ilmawati, Kusrini Kusrini, Tonny Hidayat","doi":"10.33395/sinkron.v8i2.13507","DOIUrl":"https://doi.org/10.33395/sinkron.v8i2.13507","url":null,"abstract":"In the field of facial expression recognition (FER), the availability of balanced and representative datasets is key to success in training accurate models. However, Facial Expression Recognition Challenge (FERC) datasets often face the challenge of class imbalance, where some facial expressions have a much smaller number of samples compared to others. This issue can result in biased and unsatisfactory model performance, especially in recognizing less common facial expressions. Data augmentation techniques are becoming an important strategy as they can expand the dataset by creating new variations of existing samples, thus increasing the variety and diversity of the data. Data augmentation can be used to increase the number of samples for less common facial expression classes, thus improving the model's ability to recognize and understand diverse facial expressions. The augmentation results are then combined with balancing techniques such as SMOTE coupled with undersampling to improve model performance. In this study, VGG19 is used to support better model performance. This will provide valuable guidelines for optimizing more advanced CNN models in the future and may encourage further research in creating more innovative augmentation techniques.","PeriodicalId":34046,"journal":{"name":"Sinkron","volume":"21 12","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140361102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Implementation of Cloud Run and Cloud Storage as REST API Service on OutfitHub Application
Pub Date: 2024-03-31 | DOI: 10.33395/sinkron.v9i2.13387
Derryl Reflando Tarigan, Muhammad Irsan, Muhammad Faris Fathoni
The development of cloud computing technology has progressed rapidly in recent years, especially with the emergence of Google Cloud services (GCR), which have become one of the leading cloud offerings. This research focuses on the OutfitHub application, which assists users in determining clothing styles using a personalized recommendation system. In developing this application, the research seeks to implement cloud computing services to improve application performance. The purpose of this research is to implement cloud computing, specifically the Cloud Run and Cloud Storage services, as a REST API in the OutfitHub application. By implementing these two services, the application no longer needs to plan for storage capacity that keeps growing or to worry about server configuration, because both are handled fully by GCR. Implementing cloud computing also provides a variety of other benefits, such as being able to access data from anywhere and at any time. This implementation is expected to run the OutfitHub application in a cloud environment in a serverless computing manner without requiring the provisioning of unnecessary virtual machines.
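A minimal sketch of a Cloud Run-compatible service of the kind the abstract describes: Cloud Run injects the PORT environment variable, and the handler returns a short-lived signed URL to an image stored in Cloud Storage. The bucket name, route, and object layout are hypothetical, not taken from OutfitHub, and generating signed URLs assumes suitable signing credentials are configured.

```python
# Sketch: Cloud Run-style REST service serving outfit images from Cloud Storage.
# Bucket, route, and object names are hypothetical placeholders.
import datetime
import os

from flask import Flask, jsonify
from google.cloud import storage

app = Flask(__name__)
BUCKET_NAME = "outfithub-assets"           # hypothetical bucket name

@app.route("/api/outfits/<item_id>/image", methods=["GET"])
def outfit_image(item_id):
    client = storage.Client()
    blob = client.bucket(BUCKET_NAME).blob(f"outfits/{item_id}.jpg")
    # Short-lived signed URL so the client fetches the image directly from storage
    url = blob.generate_signed_url(expiration=datetime.timedelta(minutes=15))
    return jsonify({"item": item_id, "image_url": url})

if __name__ == "__main__":
    # Cloud Run sets PORT; default to 8080 when running locally
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```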
{"title":"Implementation of Cloud Run and Cloud Storage as REST API Service on OutfitHub Application","authors":"Derryl Reflando Tarigan, Muhammad Irsan, Muhammad Faris Fathoni","doi":"10.33395/sinkron.v9i2.13387","DOIUrl":"https://doi.org/10.33395/sinkron.v9i2.13387","url":null,"abstract":"The development of Cloud Computing technology has progressed rapidly in recent years especially with the emergence of Google Cloud Services (GCR) which has become one of the leading cloud service providers. This research focuses on the OutfitHub application, which plays a role in assisting users in determining clothing styles using a personalized recommendation system. In developing this application, the research seeks to implement cloud computing services to improve application performance. The purpose of this research is to implement Cloud Computing, especially Cloud run and Cloud Storage services as Rest API in the Outfithub application. By implementing these two services, it is expected that there is no need to pay attention to the problem of Storage needs that are growing at any time and no need to worry about the need for server configuration because both of these things will be fully done by GCR. Implementing Cloud Computing will provide a variety of benefits in addition to those previously mentioned, such as: being able to access data from anywhere and at any time. This implementation is expected to be able to run OutfitHub applications in a Cloud environment in a serverless computing manner without requiring the design of unnecessary virtual machines.","PeriodicalId":34046,"journal":{"name":"Sinkron","volume":"19 20","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140361375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis of The Use of Nguyen Widrow Algorithm in Backpropagation in Kidney Disease
Pub Date: 2024-03-31 | DOI: 10.33395/sinkron.v8i2.13608
Romanus Damanik, Muhammad Zarlis, Zakarias Situmorang
Fast and accurate diagnosis is very important for kidney disease. This research applies and analyzes the Nguyen-Widrow algorithm within the backpropagation method of an artificial neural network for kidney disease diagnosis, with the aim of improving prediction accuracy and diagnostic time efficiency. The Nguyen-Widrow algorithm accelerates convergence and stabilizes the learning process in artificial neural networks, and is therefore expected to make a meaningful contribution to the handling of health data. This study uses MATLAB as the implementation platform and a dataset of medical records of kidney disease patients collected from a hospital that specializes in treating such patients. The data pre-processing and neural network modeling stages use the Nguyen-Widrow algorithm, while the model is trained with the backpropagation method. The results show that the Nguyen-Widrow algorithm improves the accuracy of predicting whether someone suffers from kidney disease compared to using the backpropagation method alone. Analysis of the model's performance shows a significant improvement in stability and convergence speed during learning, indicating that data processing and medical decision-making become more efficient. The research also examines the challenges and limitations of implementing the Nguyen-Widrow algorithm, including the sensitivity of the initialization parameters and the quality required of the dataset used to train the model. Overall, this research demonstrates the ability of the Nguyen-Widrow algorithm to improve the performance of artificial neural networks in diagnosing kidney disease. By implementing the algorithm in MATLAB, the results show that current data processing technology and analysis tools can deliver significant improvements in accuracy and efficiency in the medical field. This research is also expected to give a new direction to the development of machine learning algorithms for healthcare applications, especially kidney disease diagnosis, and thereby contribute to improving the quality of healthcare and treatment outcomes for patients suffering from kidney disease.
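The abstract does not spell the initialization out, so as context: the standard Nguyen-Widrow rule it refers to draws small random hidden-layer weights, rescales each hidden neuron's weight vector to length beta = 0.7 * h^(1/n) (h hidden neurons, n inputs), and draws biases uniformly in [-beta, beta]. A NumPy sketch, with layer sizes chosen only for illustration (the study itself works in MATLAB):

```python
# Sketch of the standard Nguyen-Widrow initialization applied before
# backpropagation training. Layer sizes below are illustrative.
import numpy as np

def nguyen_widrow_init(n_inputs, n_hidden, seed=0):
    rng = np.random.default_rng(seed)
    # Start from small random weights in [-0.5, 0.5]
    W = rng.uniform(-0.5, 0.5, size=(n_hidden, n_inputs))
    # Scale factor beta = 0.7 * h^(1/n)
    beta = 0.7 * n_hidden ** (1.0 / n_inputs)
    # Rescale each hidden neuron's weight vector to length beta
    W = beta * W / np.linalg.norm(W, axis=1, keepdims=True)
    # Biases drawn uniformly in [-beta, beta]
    b = rng.uniform(-beta, beta, size=n_hidden)
    return W, b

W1, b1 = nguyen_widrow_init(n_inputs=24, n_hidden=10)   # e.g. 24 clinical features
print(W1.shape, b1.shape, np.linalg.norm(W1, axis=1))   # each row has length beta
```

Spreading the initial weight vectors evenly in this way is what gives the faster, more stable convergence the study attributes to the method.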
{"title":"Analysis of The Use of Nguyen Widrow Algorithm in Backpropagation in Kidney Disease","authors":"Romanus Damanik, Muhammad Zarlis, Zakarias Situmorang","doi":"10.33395/sinkron.v8i2.13608","DOIUrl":"https://doi.org/10.33395/sinkron.v8i2.13608","url":null,"abstract":"Fast and accurate diagnosis is very important for kidney disease. This research conducts and analyzes by using Nguyen Widrow Algorithm in Back Propagation method in artificial neural network for kidney disease diagnosis with the aim to improve the accuracy in predicting and time efficiency in diagnosing. The Nguyen Widrow algorithm is very capable of accelerating convergence and stabilizing the learning process in artificial neural networks, which is also expected to present a meaningful contribution to the handling of health data. This study uses MATLAB as a platform for algorithm implementation and a dataset of medical records of kidney disease patients collected from a hospital that specializes in treating kidney disease patients. The data pre-processing and artificial neural network modeling stages use the Nguyen Widrow algorithm, while the model training process uses the Back Propagation method. The results showed that the Nguyen Widrow algorithm was able to improve the accuracy of predicting someone suffering from kidney disease compared to using only the Back Propagation method. Analysis of the performance of the model shows a significant improvement in stability and convergence speed during the learning process. This indicates that data processing and medical decision making becomes more efficient. On the other hand, this research also studied the challenges and limitations that will be faced in terms of implementation of the Nguyen Widrow algorithm. Also the sensitivity of the initialization parameters, the need for the quality of the dataset to be used in training the model.This research reveals the ability of the Nguyen Widrow algorithm to improve the performance of artificial neural networks in diagnosing kidney disease. By implementing this algorithm in MATLAB, the results show that the use of the latest data processing technology and analysis tools can provide significant improvements in accuracy and efficiency in the medical field. In addition, this research is expected to provide a new direction in the development of machine learning algorithms for applications in the healthcare field, especially for diagnosing kidney disease. By further utilizing this technology, it contributes significantly to improving the quality of healthcare and treatment outcomes for patients suffering from kidney disease.","PeriodicalId":34046,"journal":{"name":"Sinkron","volume":"4 12","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140359375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Diabetes Disease Detection Classification Using Light Gradient Boosting (LightGBM) With Hyperparameter Tuning
Pub Date: 2024-03-31 | DOI: 10.33395/sinkron.v8i2.13530
Elisa Ramadanti, Devi Aprilya Dinathi, Christianskaditya Christianskaditya, Didih Rizki Chandranegara
Diabetes is a condition caused by an imbalance between the body's need for insulin and insufficient insulin production by the pancreas, leading to an increase in blood sugar concentration. This study aims to find the best classification performance on a diabetes dataset with the LightGBM method. The dataset used consists of 768 rows and 9 columns, with target values of 0 and 1. In this study, resampling with SMOTE is applied to overcome data imbalance, and hyperparameter optimization is performed. Model evaluation uses a confusion matrix and metrics such as accuracy, recall, precision, and F1-score. Several tests were conducted. In hyperparameter optimization tests using GridSearchCV and RandomSearchCV, the LightGBM method showed good performance. When data resampling is applied, LightGBM with GridSearchCV optimization achieves the highest accuracy at 84%, while LightGBM with RandomSearchCV optimization reaches 82%.
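A hedged sketch of the pipeline described above: SMOTE resampling of the training data, then LightGBM tuned with GridSearchCV. The file name and "Outcome" column follow the common 768-row Pima diabetes layout, and the parameter grid is illustrative, not the study's actual search space.

```python
# Sketch: SMOTE resampling + LightGBM with GridSearchCV tuning.
# File name, column names, and the parameter grid are illustrative.
import pandas as pd
from imblearn.over_sampling import SMOTE
from lightgbm import LGBMClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("diabetes.csv")                  # 768 rows, 8 features + Outcome
X, y = df.drop(columns=["Outcome"]), df["Outcome"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Resample only the training split so synthetic samples never leak into the test set
X_res, y_res = SMOTE(random_state=42).fit_resample(X_train, y_train)

param_grid = {
    "num_leaves": [15, 31, 63],
    "learning_rate": [0.01, 0.05, 0.1],
    "n_estimators": [100, 300],
}
search = GridSearchCV(LGBMClassifier(random_state=42), param_grid,
                      scoring="accuracy", cv=5)
search.fit(X_res, y_res)

print("best params:", search.best_params_)
print(classification_report(y_test, search.best_estimator_.predict(X_test)))
```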
{"title":"Diabetes Disease Detection Classification Using Light Gradient Boosting (LightGBM) With Hyperparameter Tuning","authors":"Elisa Ramadanti, Devi Aprilya Dinathi, Christianskaditya Christianskaditya, Didih Rizki Chandranegara","doi":"10.33395/sinkron.v8i2.13530","DOIUrl":"https://doi.org/10.33395/sinkron.v8i2.13530","url":null,"abstract":"Diabetes is a condition caused by an imbalance between the need for insulin in the body and insufficient insulin production by the pancreas, causing an increase in blood sugar concentration. This study aims to find the best classification performance on diabetes datasets with the LightGBM method. The dataset used consists of 768 rows and 9 columns, with target values of 0 and 1. In this study, resampling is applied to overcome data imbalance using SMOTE and perform hyperparameter optimization. Model evaluation is performed using confusion matrix and various metrics such as accuracy, recall, precision and f1-score. This research conducted several tests. In hyperparameter optimization tests using GridSearchCV and RandomSearchCV, the LightGBM method showed good performance. In tests that apply data resampling, the LightGBM method achieves the highest accuracy, namely the LightGBM method with GridSearchCV optimization with the highest accuracy reaching 84%, while LightGBM with RandomSearchCV optimization reaches 82% accuracy.","PeriodicalId":34046,"journal":{"name":"Sinkron","volume":"102 11","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140360197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Feasibility Analysis of Bengkel Koding Website Using Black Box Testing and Boundary Value Analysis
Pub Date: 2024-03-31 | DOI: 10.33395/sinkron.v8i2.13589
Clara Edrea Evelyna Sony Putri, Ajib Susanto
In an era of rapid technological development, application development has become common, especially in coding. However, most websites do not provide appropriate assignments and instructors to help improve coding skills. Because of this, the Bengkel Koding of Dian Nuswantoro University Semarang is a solution for improving the quality of coding learning. This research aims to identify shortcomings in the website and ensure that it functions as users expect. By testing the application in this way, researchers can identify problems that could affect the user experience. This research uses one of the most frequently used testing approaches, Black Box testing, whose objective is to verify that the system's functions, inputs, and outputs align with the specified requirements. In addition to the Black Box method, this research uses a technique called Boundary Value Analysis, which identifies errors or bugs that can affect the user experience by focusing on the boundaries of input values. The test results are scored with a quality ratio that determines whether the system is suitable for use. Across 30 test cases, most website functions passed, with the feasibility level reaching 83.333%. Nonetheless, five errors or bugs were still found, emphasizing the need for further improvement. The results of this study provide valuable insights into improving the quality and convenience for users accessing the Bengkel Koding website.
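A small sketch of how boundary value analysis shapes test cases: inputs are chosen at and just around a field's valid limits. The validator and its 1-100 range are hypothetical stand-ins for a Bengkel Koding form rule, not the site's real code, and the cases are illustrative of the 30 used in the study rather than copies of them.

```python
# Sketch: boundary value analysis expressed as parametrized black-box test cases.
# validate_score and its 1-100 range are hypothetical placeholders.
import pytest

def validate_score(value: int) -> bool:
    """Hypothetical form rule: an assignment score must be between 1 and 100."""
    return 1 <= value <= 100

# Boundary values: just below, at, and just above each limit of the valid range
@pytest.mark.parametrize("value,expected", [
    (0, False),     # lower boundary - 1
    (1, True),      # lower boundary
    (2, True),      # lower boundary + 1
    (99, True),     # upper boundary - 1
    (100, True),    # upper boundary
    (101, False),   # upper boundary + 1
])
def test_score_boundaries(value, expected):
    assert validate_score(value) == expected
```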
{"title":"Feasibility Analysis of Bengkel Koding Website Using Black Box Testing and Boundary Value Analysis","authors":"Clara Edrea Evelyna Sony Putri, Ajib Susanto","doi":"10.33395/sinkron.v8i2.13589","DOIUrl":"https://doi.org/10.33395/sinkron.v8i2.13589","url":null,"abstract":"In an era of rapid technological development, application development has become common, especially in coding. However, most websites do not give appropriate assignments and instructors to help improve coding skills. Because of this, the Bengkel Koding of Dian Nuswantoro University Semarang is a solution to improving the quality of coding learning. This research aims to identify the shortcomings in the website and ensure that the website functions as expected by the users. By testing the application like this, researchers can know which problems can affect the user experience. This research uses one of the frequently used tests, namely Black Box testing. The objective is to verify that the system's functions, inputs, and outputs align with the specified requirements. In addition to the Black Box method, this research uses a technique called Boundary Value Analysis. This technique is to identify errors or bugs that can affect the user experience by focusing on the input value boundary. The test results will use a quality ratio that will determine whether or not the system is suitable for use by users. Through 30 test cases, most website functions have been tested properly, with the feasibility level reaching 83.333%. Nonetheless, five errors or bugs were still found, emphasizing the need for further improvement. The results of this study provide valuable insights into improving the quality and convenience of users in accessing the Bengkel Koding website.","PeriodicalId":34046,"journal":{"name":"Sinkron","volume":"55 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140360734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}