
Latest publications from Sinkron

Integrating TOGAF and Big Data for Digital Transformation: Case Study on the Lending Industry
Pub Date : 2024-05-07 DOI: 10.33395/sinkron.v8i2.13648
Andreas Yudhistira, A. Fajar
In today’s digital era, the strategic integration of enterprise architecture frameworks with Big Data technologies is crucial in driving digital transformation, especially within the lending industry. This research aims to identify and analyze how The Open Group Architecture Framework (TOGAF) can be integrated with Big Data to enhance innovation, operational efficiency, and decision-making in the lending sector. This study examines Indonesian financial institutions using qualitative case studies, exploring the intricate practices, challenges, and benefits of the combination of TOGAF and Big Data. The qualitative methodology focuses on in-depth interviews and document analysis to gather contextual insights into the implementation dynamics and impacts of these technologies. Findings indicate that integrating TOGAF and Big Data not only streamlines workflows but also significantly enhances data security and risk management—critical elements in the lending industry. A vital outcome of this study is the development of a robust integration model that serves as a blueprint for companies in similar sectors to navigate their digital transformation journeys. Additionally, this research provides strategic recommendations to overcome integration and implementation challenges. These guidelines facilitate the transition to a more cohesive and strengthened digital architecture, equipping financial institutions to manage the complexities of modern digital economies effectively. Ultimately, this study delivers a comprehensive framework that enriches theoretical understanding and offers practical insights for effective technology integration in financial services.
Citations: 0
Prediction of Stunting in Toddlers Using Bagging and Random Forest Algorithms
Pub Date : 2024-03-31 DOI: 10.33395/sinkron.v8i2.13448
Juwariyem Juwariyem, S. Sriyanto, Sri Lestari, Chairani Chairani
Stunting is a condition of failure to thrive in toddlers, caused by prolonged malnutrition, repeated infections, and a lack of stimulation. It is influenced by the mother's health during pregnancy, the health status of adolescents, and economic, cultural, and environmental factors such as sanitation and access to health services. Stunting prediction currently relies on a common method, secondary data analysis: surveys and research are conducted to collect data on risk factors related to stunting, such as maternal nutritional status, child nutritional intake, access to health services, sanitation, and other socioeconomic factors. Such analysis can give an overview of the prevalence of stunting and its contributing factors, but a better solution is needed. One option is data mining, which can be used to analyze data, make predictions about the future, and provide useful information for business or health needs. This research therefore applies the Bagging method and the Random Forest algorithm to measure the accuracy of stunting predictions in toddlers. Bagging (Bootstrap Aggregation) is an ensemble method that improves classification by combining classifiers trained on random bootstrap samples of the training dataset, which reduces variance and avoids overfitting. Random Forest is a powerful machine-learning algorithm that combines the decisions of many independent decision trees to improve prediction performance and model stability. Combining the Bagging method with the Random Forest algorithm is expected to yield better stunting predictions in toddlers. This research uses a dataset of 10,001 records with 7 attributes and 1 class attribute. Based on the test results, the model achieved a precision of 91.72% and recall of 98.84% for the "yes" class, a precision of 93.55% and recall of 65.28% for the "no" class, and an overall accuracy of 91.98%.
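The ensemble described above can be sketched with scikit-learn. The synthetic dataset below (via `make_classification`) is a stand-in for the paper's 10,001-record stunting dataset, so the printed scores will not match the reported 91.98% accuracy:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Synthetic stand-in: 7 attributes and an imbalanced binary class,
# loosely mirroring the shape of the stunting dataset.
X, y = make_classification(n_samples=2000, n_features=7, n_informative=5,
                           weights=[0.75, 0.25], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

# Bagging over a Random Forest base estimator: each bag is a bootstrap
# sample of the training set, which reduces variance and overfitting.
model = BaggingClassifier(
    RandomForestClassifier(n_estimators=50, random_state=42),
    n_estimators=10, random_state=42)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

print(f"accuracy : {accuracy_score(y_te, pred):.4f}")
print(f"precision: {precision_score(y_te, pred):.4f}")
print(f"recall   : {recall_score(y_te, pred):.4f}")
```

Wrapping a Random Forest inside `BaggingClassifier` adds a second level of bootstrap resampling on top of the forest's own, mirroring the paper's combination of the two methods.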
Citations: 0
Extraction of Shape and Texture Features of Dermoscopy Image for Skin Cancer Identification
Pub Date : 2024-03-31 DOI: 10.33395/sinkron.v8i2.13557
Febri Aldi, S. Sumijan
Skin diseases are increasing and becoming a very serious problem. Skin cancers fall into two broad groups, melanoma and non-melanoma, and the cases encountered most often are non-melanoma. A critical factor in the treatment of skin cancer is early diagnosis. Doctors usually use a biopsy to detect skin cancer, but computer-based technology provides a more convenient, cheaper, and faster diagnosis of skin cancer symptoms. This study aims to identify the type of skin cancer. The data used were 60 dermoscopy images obtained from the Kaggle site, covering 6 lesion types: Basal Cell Carcinoma, Dermatofibroma, Melanoma, Nevus, Pigmented Benign Keratosis, and Vascular Lesion. Dermoscopy image processing begins with a pre-processing step that converts RGB images to the LAB color space. Segmentation is then carried out to separate the lesion from the background. Shape and texture feature extraction is used to characterize the dermoscopy images: two shape features, eccentricity and metric, and four texture features, contrast, correlation, energy, and homogeneity. The result of this study is that the type of skin cancer can be identified from the extracted image features using a program built in the MATLAB application. The shape and texture feature-extraction technique is shown to work well in identifying the type of skin cancer. Future work is expected to use more data and to add color features for identifying dermoscopy images.
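The shape and texture descriptors named above (which the paper computes in MATLAB) can be reproduced with plain NumPy. The binary mask and gray image below are synthetic stand-ins for a segmented dermoscopy lesion:

```python
import numpy as np

def shape_features(mask):
    """Eccentricity and metric (circularity) of a boolean region mask."""
    ys, xs = np.nonzero(mask)
    area = xs.size
    # second-order central moments -> eigenvalues of the covariance matrix
    mu20, mu02 = np.var(xs), np.var(ys)
    mu11 = np.mean((xs - xs.mean()) * (ys - ys.mean()))
    root = np.sqrt((mu20 - mu02) ** 2 + 4 * mu11 ** 2)
    lam1, lam2 = (mu20 + mu02 + root) / 2, (mu20 + mu02 - root) / 2
    ecc = np.sqrt(1 - lam2 / lam1) if lam1 > 0 else 0.0
    # perimeter = region pixels with at least one 4-neighbour outside
    p = np.pad(mask, 1)
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    perimeter = area - int(np.count_nonzero(interior & mask))
    metric = 4 * np.pi * area / perimeter ** 2   # ~1 for a disc
    return ecc, metric

def glcm_features(gray, levels=8):
    """Contrast, correlation, energy, homogeneity from a GLCM, offset (0,1)."""
    q = (gray * (levels - 1)).astype(int)        # quantise to `levels` bins
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt((((i - mu_i) ** 2) * p).sum())
    sd_j = np.sqrt((((j - mu_j) ** 2) * p).sum())
    contrast = (((i - j) ** 2) * p).sum()
    correlation = (((i - mu_i) * (j - mu_j) * p).sum()) / (sd_i * sd_j)
    energy = (p ** 2).sum()
    homogeneity = (p / (1 + np.abs(i - j))).sum()
    return contrast, correlation, energy, homogeneity

# Synthetic "lesions": a disc is near-circular (metric ~1, eccentricity ~0),
# while an elongated bar has high eccentricity.
yy, xx = np.mgrid[:64, :64]
disc = (xx - 32) ** 2 + (yy - 32) ** 2 <= 20 ** 2
bar = np.zeros((64, 64), bool)
bar[30:34, 10:54] = True
ecc_disc, met_disc = shape_features(disc)
ecc_bar, met_bar = shape_features(bar)
tex = glcm_features(np.random.default_rng(0).random((64, 64)))
```

On real data these functions would be applied to the segmented lesion mask and its grayscale intensities, yielding the 2 shape and 4 texture features the paper feeds to its identification step.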
Citations: 0
Implementation of App Engine and Cloud Storage as REST API on Smart Farm Application
Pub Date : 2024-03-31 DOI: 10.33395/sinkron.v8i2.13386
K. Azkiya, Muhamad Irsan, Muhammad Faris Fathoni
Smart Farm is an agricultural application that uses machine learning and cloud computing technology to improve the efficiency of the farming process. Technological advancement and sustainable agriculture are two essential aspects of supporting global food security. This research investigates the implementation of App Engine and Cloud Storage in developing a REST API for the Smart Farm application. By utilizing cloud computing technology such as App Engine, and cloud storage such as Cloud Storage, efficient solutions can be created to monitor and manage agriculture better. The REST API developed here allows Smart Farm users to access data and control farming devices efficiently. The authors designed, developed, and tested the system to ensure optimal performance and reliability in agricultural data collection and distribution. This approach has several significant advantages. First, App Engine allows for easy scalability, ensuring the system can handle increased data demand without disruption. Second, Cloud Storage provides secure and scalable storage for agricultural data that can be accessed from anywhere, giving farmers easy and quick access to critical data. Moreover, the use of cloud technology reduces infrastructure and maintenance costs. The developed system integrates App Engine and Cloud Storage with the Smart Farm application: App Engine acts as the processing engine that receives user requests via the REST API, processes the required data, and returns appropriate responses, while farm data, such as image data, is stored and managed in Cloud Storage. Users can access this data through the Smart Farm app or other devices, enabling better farm monitoring and decision-making.
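As a concrete illustration of the request/response flow described above, here is a minimal REST-style read endpoint using only the Python standard library. The `/sensors/<field-id>` route and the in-memory `SENSORS` record are invented for this sketch; a real deployment would serve the handler from App Engine and read farm data from Cloud Storage rather than a dict:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for farm records the real application keeps in Cloud Storage.
SENSORS = {"field-1": {"soil_moisture": 41.2, "temperature_c": 27.5}}

class FarmAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        # REST-style read: GET /sensors/<field-id> returns the record as JSON.
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "sensors" and parts[1] in SENSORS:
            body = json.dumps(SENSORS[parts[1]]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging in the sketch

server = HTTPServer(("127.0.0.1", 0), FarmAPI)  # port 0 picks a free port
# server.serve_forever()  # in production, App Engine replaces this loop
```

The same handler shape extends naturally to `POST`/`PUT` routes for controlling farming devices, which is the other half of the API the paper describes.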
Citations: 0
Comparative Study: Preemptive Shortest Job First and Round Robin Algorithms
Pub Date : 2024-03-31 DOI: 10.33395/sinkron.v8i2.12525
Rakhmat Purnomo, Tri Dharma Putra
Abstract: An operating system is software that acts as an interface between computer hardware and the user. An operating system is known as a resource manager: its main responsibility is to manage the resources of the computer system. Scheduling is a key concept in multitasking and multiprocessing operating system design, achieved by switching the CPU among processes. Shortest job first (SJF) and round robin are two well-known CPU scheduling algorithms. SJF can be made preemptive: in preemptive shortest job first, a running process can be interrupted when a new process arrives. The round robin algorithm instead divides CPU time into slices, also called quanta, and context-switches between processes when a slice expires. In this paper we present a comparative study of the preemptive shortest job first and round robin algorithms. Three case studies are discussed to understand the two algorithms more deeply. In all three, the average waiting time and average turnaround time are higher for the round robin algorithm. In the first case study, round robin's average waiting time is 52% higher and its average turnaround time 30% higher. In the second, average waiting time is 52% higher and average turnaround time 35% higher. In the third, average waiting time is 50% higher and average turnaround time 28% higher. We therefore conclude that, for this kind of data, preemptive shortest job first is more efficient than round robin. Keywords: comparative study, preemptive shortest job first algorithm, round robin algorithm, turnaround time, average waiting time, time slice
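The two policies can be compared directly with a small simulation. The four-process workload below is illustrative, not one of the paper's three case studies:

```python
from collections import deque

def srtf(procs):
    """Preemptive SJF (shortest remaining time first).
    procs: list of (arrival, burst). Returns (avg_wait, avg_turnaround)."""
    n = len(procs)
    remaining = [b for _, b in procs]
    done, t, completion = 0, 0, [0] * n
    while done < n:
        ready = [i for i in range(n) if procs[i][0] <= t and remaining[i] > 0]
        if not ready:
            t += 1
            continue
        i = min(ready, key=lambda k: remaining[k])  # shortest remaining job
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            completion[i] = t
            done += 1
    tat = [completion[i] - procs[i][0] for i in range(n)]
    wait = [tat[i] - procs[i][1] for i in range(n)]
    return sum(wait) / n, sum(tat) / n

def round_robin(procs, quantum):
    """Round robin with a fixed time slice (quantum)."""
    n = len(procs)
    order = sorted(range(n), key=lambda i: procs[i][0])
    remaining = [procs[i][1] for i in range(n)]
    queue, t, idx, completion = deque(), 0, 0, [0] * n
    while idx < n or queue:
        if not queue:                                  # CPU idle: jump ahead
            t = max(t, procs[order[idx]][0])
        while idx < n and procs[order[idx]][0] <= t:   # admit arrivals
            queue.append(order[idx]); idx += 1
        i = queue.popleft()
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        while idx < n and procs[order[idx]][0] <= t:   # arrivals during slice
            queue.append(order[idx]); idx += 1
        if remaining[i] > 0:
            queue.append(i)                            # preempted: requeue
        else:
            completion[i] = t
    tat = [completion[i] - procs[i][0] for i in range(n)]
    wait = [tat[i] - procs[i][1] for i in range(n)]
    return sum(wait) / n, sum(tat) / n

procs = [(0, 8), (1, 4), (2, 9), (3, 5)]   # (arrival, burst), illustrative
print("SRTF :", srtf(procs))               # (6.5, 13.0)
print("RR q2:", round_robin(procs, 2))
```

On this workload SRTF's averages beat round robin with quantum 2, matching the direction of the paper's three case studies.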
Citations: 0
Optimizing Facial Expression Recognition with Image Augmentation Techniques: VGG19 Approach on FERC Dataset
Pub Date : 2024-03-31 DOI: 10.33395/sinkron.v8i2.13507
Fahma Inti Ilmawati, Kusrini Kusrini, Tonny Hidayat
In the field of facial expression recognition (FER), the availability of balanced and representative datasets is key to success in training accurate models. However, Facial Expression Recognition Challenge (FERC) datasets often face the challenge of class imbalance, where some facial expressions have a much smaller number of samples compared to others. This issue can result in biased and unsatisfactory model performance, especially in recognizing less common facial expressions. Data augmentation techniques are becoming an important strategy as they can expand the dataset by creating new variations of existing samples, thus increasing the variety and diversity of the data. Data augmentation can be used to increase the number of samples for less common facial expression classes, thus improving the model's ability to recognize and understand diverse facial expressions. The augmentation results are then combined with balancing techniques such as SMOTE coupled with undersampling to improve model performance. In this study, VGG19 is used to support better model performance. This will provide valuable guidelines for optimizing more advanced CNN models in the future and may encourage further research in creating more innovative augmentation techniques.
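SMOTE's core interpolation step, which the paper pairs with undersampling to balance the FERC classes, can be sketched in NumPy. The 2-D toy features below stand in for image-derived feature vectors; a production pipeline would typically use imbalanced-learn's `SMOTE` instead:

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    """Create n_new synthetic minority samples: pick a minority sample,
    pick one of its k nearest minority neighbours, and interpolate."""
    rng = np.random.default_rng(0) if rng is None else rng
    X_min = np.asarray(X_min, dtype=float)
    out = np.empty((n_new, X_min.shape[1]))
    for m in range(n_new):
        i = rng.integers(len(X_min))
        dist = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(dist)[1:k + 1]   # skip the sample itself
        j = rng.choice(neighbours)
        gap = rng.random()                       # interpolation factor in [0, 1)
        out[m] = X_min[i] + gap * (X_min[j] - X_min[i])
    return out

rng = np.random.default_rng(42)
X_majority = rng.normal(0.0, 1.0, size=(100, 2))   # 100 majority samples
X_minority = rng.normal(3.0, 0.5, size=(10, 2))    # only 10 minority samples
X_synthetic = smote(X_minority, n_new=90, rng=rng)
# after oversampling, both classes contribute 100 samples
```

Because each synthetic point lies on a segment between two real minority samples, the new points stay inside the minority class's region of feature space rather than being arbitrary noise.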
Citations: 0
Implementation of Cloud Run and Cloud Storage as REST API Service on OutfitHub Application
Pub Date : 2024-03-31 DOI: 10.33395/sinkron.v9i2.13387
Derryl Reflando Tarigan, Muhammad Irsan, Muhammad Faris Fathoni
The development of cloud computing technology has progressed rapidly in recent years, especially with the emergence of Google Cloud services such as Cloud Run (GCR), which have become some of the leading cloud offerings. This research focuses on the OutfitHub application, which assists users in choosing clothing styles through a personalized recommendation system. In developing this application, the research seeks to implement cloud computing services to improve application performance. The purpose of this research is to implement cloud computing, specifically the Cloud Run and Cloud Storage services, as a REST API in the OutfitHub application. By implementing these two services, there is no need to worry about storage needs that may grow at any time, nor about server configuration, because both are handled fully by GCR. Implementing cloud computing also provides further benefits beyond those mentioned, such as being able to access data from anywhere and at any time. This implementation is expected to run the OutfitHub application in a cloud environment in a serverless computing manner, without requiring the design of unnecessary virtual machines.
Citations: 0
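The abstract above describes exposing application data through a REST API running on Cloud Run with Cloud Storage as the backing store. As a rough illustration of that pattern, the sketch below stands up a stateless JSON endpoint using only Python's standard library; the `/outfits/{id}` route, the `OUTFITS` data, and the commented-out storage calls are hypothetical, and a real deployment would use the google-cloud-storage client behind a production HTTP framework.

```python
# Minimal sketch of the kind of REST endpoint the abstract describes: a
# stateless HTTP service (deployable to Cloud Run) that would serve objects
# held in Cloud Storage. The storage calls are shown only as comments
# because they need GCP credentials; everything else runs locally.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-memory stand-in for a Cloud Storage bucket (hypothetical data).
OUTFITS = {"1": {"id": "1", "style": "casual"}}

class OutfitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A real handler would fetch from Cloud Storage, e.g.:
        #   blob = storage.Client().bucket("outfithub").blob(path)
        #   body = blob.download_as_bytes()
        outfit_id = self.path.rstrip("/").split("/")[-1]
        outfit = OUTFITS.get(outfit_id)
        body = json.dumps(outfit or {"error": "not found"}).encode()
        self.send_response(200 if outfit else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

def serve_once(port=0):
    """Start the server on a free port in a background thread."""
    server = HTTPServer(("127.0.0.1", port), OutfitHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    srv = serve_once()
    url = f"http://127.0.0.1:{srv.server_port}/outfits/1"
    with urllib.request.urlopen(url) as resp:
        print(json.loads(resp.read()))  # {'id': '1', 'style': 'casual'}
    srv.shutdown()
```

Because the handler keeps no state between requests, the same container image can be scaled from zero to many instances, which is the serverless property the abstract aims for.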
Analysis of The Use of Nguyen Widrow Algorithm in Backpropagation in Kidney Disease
Pub Date : 2024-03-31 DOI: 10.33395/sinkron.v8i2.13608
Romanus Damanik, Muhammad Zarlis, Zakarias Situmorang
Fast and accurate diagnosis is very important for kidney disease. This research applies and analyzes the Nguyen-Widrow algorithm within the backpropagation method of an artificial neural network for kidney disease diagnosis, with the aim of improving prediction accuracy and diagnostic time efficiency. The Nguyen-Widrow algorithm is well suited to accelerating convergence and stabilizing the learning process in artificial neural networks, and is therefore expected to make a meaningful contribution to the handling of health data. This study uses MATLAB as the platform for implementing the algorithm, together with a dataset of medical records collected from a hospital that specializes in treating kidney disease patients. The data pre-processing and neural network modeling stages use the Nguyen-Widrow algorithm, while the model is trained with backpropagation. The results show that the Nguyen-Widrow algorithm improves the accuracy of predicting whether someone suffers from kidney disease compared to using backpropagation alone. Analysis of the model's performance shows a significant improvement in stability and convergence speed during learning, indicating that data processing and medical decision-making become more efficient. The research also examines the challenges and limitations of implementing the Nguyen-Widrow algorithm, including the sensitivity of the initialization parameters and the quality required of the training dataset. Overall, this research demonstrates the ability of the Nguyen-Widrow algorithm to improve the performance of artificial neural networks in diagnosing kidney disease. By implementing the algorithm in MATLAB, the results show that up-to-date data processing and analysis tools can deliver significant gains in accuracy and efficiency in the medical field. In addition, this research is expected to provide a new direction for the development of machine learning algorithms in healthcare, especially for diagnosing kidney disease, contributing significantly to better care quality and treatment outcomes for kidney disease patients.
Citations: 0
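The Nguyen-Widrow step this abstract builds on is a weight-initialization rule, not a training rule: hidden-unit weights are drawn uniformly and then rescaled so each unit's weight vector has norm beta = 0.7 * H^(1/N) for H hidden units and N inputs, spreading the units' active regions across the input space before backpropagation begins. The paper's implementation is in MATLAB; the pure-Python sketch below (the function name and layer sizes are illustrative, not from the paper) shows the rule itself.

```python
# Nguyen-Widrow initialization for one hidden layer, in pure Python.
import math
import random

def nguyen_widrow_init(n_inputs, n_hidden, seed=0):
    """Initialize hidden-layer weights with the Nguyen-Widrow rule.

    Weights start uniform in [-0.5, 0.5]; each hidden unit's weight
    vector is then rescaled to norm beta = 0.7 * n_hidden**(1/n_inputs).
    (The full rule also draws biases uniformly from [-beta, beta].)
    """
    rng = random.Random(seed)
    beta = 0.7 * n_hidden ** (1.0 / n_inputs)
    weights = []
    for _ in range(n_hidden):
        row = [rng.uniform(-0.5, 0.5) for _ in range(n_inputs)]
        norm = math.sqrt(sum(w * w for w in row))
        weights.append([beta * w / norm for w in row])  # rescale to norm beta
    return weights, beta

# Illustrative sizes only: e.g. 24 clinical features, 10 hidden units.
weights, beta = nguyen_widrow_init(n_inputs=24, n_hidden=10)
norms = [math.sqrt(sum(w * w for w in row)) for row in weights]
# Every hidden unit's weight vector now has norm beta.
```

Backpropagation training then proceeds exactly as usual; only the starting point changes, which is why the abstract can compare "with Nguyen-Widrow" against plain backpropagation on the same network.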
Diabetes Disease Detection Classification Using Light Gradient Boosting (LightGBM) With Hyperparameter Tuning
Pub Date : 2024-03-31 DOI: 10.33395/sinkron.v8i2.13530
Elisa Ramadanti, Devi Aprilya Dinathi, Christianskaditya Christianskaditya, Didih Rizki Chandranegara
Diabetes is a condition caused by an imbalance between the body's need for insulin and the pancreas's insufficient insulin production, which raises blood sugar concentration. This study aims to find the best classification performance on a diabetes dataset with the LightGBM method. The dataset used consists of 768 rows and 9 columns, with target values of 0 and 1. In this study, resampling with SMOTE is applied to overcome data imbalance, and hyperparameter optimization is performed. Model evaluation uses a confusion matrix and metrics such as accuracy, recall, precision, and F1-score. Several tests were conducted. In the hyperparameter optimization tests using GridSearchCV and RandomSearchCV, the LightGBM method showed good performance. In the tests that apply data resampling, LightGBM with GridSearchCV optimization achieves the highest accuracy at 84%, while LightGBM with RandomSearchCV optimization reaches 82%.
Citations: 0
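The gap between the two results above (84% vs. 82%) reflects how the tuners search: GridSearchCV enumerates every combination in the parameter grid, while RandomSearchCV evaluates only a fixed number of sampled combinations. The toy sketch below mimics that contrast in pure Python; the parameter grid and the `score` function are stand-ins for cross-validated LightGBM accuracy, not the paper's actual setup.

```python
# Toy comparison of exhaustive grid search vs. random search over a
# LightGBM-style hyperparameter space. A synthetic scoring function
# stands in for cross-validated accuracy so the comparison runs without
# lightgbm or scikit-learn installed.
import itertools
import random

PARAM_GRID = {
    "num_leaves": [15, 31, 63],
    "learning_rate": [0.05, 0.1, 0.2],
    "n_estimators": [100, 200],
}

def score(params):
    # Synthetic stand-in for CV accuracy; peaks at
    # num_leaves=31, learning_rate=0.1, n_estimators=200.
    return (0.84
            - 0.01 * abs(params["num_leaves"] - 31) / 16
            - 0.1 * abs(params["learning_rate"] - 0.1)
            - 0.00005 * (200 - params["n_estimators"]))

def grid_search(grid):
    """Evaluate every combination (GridSearchCV-style)."""
    keys = list(grid)
    best = max((dict(zip(keys, combo))
                for combo in itertools.product(*grid.values())), key=score)
    return best, score(best)

def random_search(grid, n_iter=5, seed=0):
    """Evaluate n_iter sampled combinations (RandomSearchCV-style)."""
    rng = random.Random(seed)
    best = max(({k: rng.choice(v) for k, v in grid.items()}
                for _ in range(n_iter)), key=score)
    return best, score(best)

best_grid, acc_grid = grid_search(PARAM_GRID)
best_rand, acc_rand = random_search(PARAM_GRID)
# Exhaustive search always reaches the grid optimum; random search may not.
```

Because the exhaustive search covers all 18 combinations here, `acc_grid` can never be lower than `acc_rand`; random search trades that guarantee for fewer model fits, which matters once the grid is large.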
Feasibility Analysis of Bengkel Koding Website Using Black Box Testing and Boundary Value Analysis
Pub Date : 2024-03-31 DOI: 10.33395/sinkron.v8i2.13589
Clara Edrea Evelyna Sony Putri, Ajib Susanto
In an era of rapid technological development, application development has become commonplace, especially in coding. However, most websites do not provide appropriate assignments or instructors to help improve coding skills. Because of this, the Bengkel Koding of Dian Nuswantoro University Semarang offers a solution for improving the quality of coding learning. This research aims to identify shortcomings in the website and ensure that it functions as users expect. By testing the application in this way, researchers can learn which problems may affect the user experience. This research uses one of the most frequently used testing approaches, Black Box testing, whose objective is to verify that the system's functions, inputs, and outputs align with the specified requirements. In addition to the Black Box method, this research applies a technique called Boundary Value Analysis, which identifies errors or bugs that can affect the user experience by focusing on the boundaries of input values. The test results use a quality ratio that determines whether the system is suitable for use. Across 30 test cases, most website functions worked properly, with the feasibility level reaching 83.333%. Nonetheless, five errors or bugs were still found, emphasizing the need for further improvement. The results of this study provide valuable insights into improving the quality and convenience of users in accessing the Bengkel Koding website.
Citations: 0
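Boundary Value Analysis concentrates test cases at the edges of an input range: the values just inside and just outside each boundary, where off-by-one validation bugs cluster. The sketch below illustrates the idea with a hypothetical integer field accepting 1-100; neither the validator nor the range is taken from the Bengkel Koding site itself.

```python
# Illustration of Boundary Value Analysis for an integer input field.
def accepts_score(value, low=1, high=100):
    """Hypothetical form validator: valid iff low <= value <= high."""
    return low <= value <= high

def boundary_values(low, high):
    """Standard BVA test points for an integer range [low, high]:
    just below, on, and just above each boundary."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

cases = boundary_values(1, 100)           # [0, 1, 2, 99, 100, 101]
results = {v: accepts_score(v) for v in cases}
# Expected: 0 and 101 rejected; 1, 2, 99, and 100 accepted.
```

A feasibility ratio like the 83.333% reported above is then just passed cases over total cases (25 of 30); a validator that, say, wrongly rejected 100 would surface immediately in this table.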