Analisis Besar Iuran Normal Metode Frozen Initial Liability dan Metode Entry Age Normal Menggunakan Tingkat Suku Bunga Cox-Ingersoll-Ross (CIR)
Pub Date: 2023-11-14 | DOI: 10.35472/indojam.v3i2.1576
Dwi Mahrani
One form of social security program is a pension funding program. A pension plan is a program designed to provide benefits to employees when they retire. Based on the type of membership, pension programs are divided into two types: individual and group. Different methods are used for the two types of participation, namely the Entry Age Normal (EAN) method for individual pension programs and the Frozen Initial Liability (FIL) method for group pension programs. Each period, both individual and group pension program participants are required to pay normal contributions. The amount of each worker's normal contribution is influenced by the age at which they enter work, the participant's initial salary, and the participant's probability of survival or death. The calculation of normal contributions also depends on the interest rate used; in this study, the interest rate follows the Cox-Ingersoll-Ross (CIR) model. The EAN normal contribution is constant in each period, while the FIL normal contribution changes when participants in the group retire. The FIL normal contribution tends toward the average of the participants' individual EAN normal contributions.
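As a rough illustration of the rate model, the sketch below simulates one CIR short-rate path with an Euler (full-truncation) scheme and derives discount factors from it; the parameter values are illustrative only, not those used in the paper.

```python
import numpy as np

def simulate_cir(r0, a, b, sigma, T, n_steps, seed=0):
    """Euler full-truncation scheme for the CIR model dr = a(b - r)dt + sigma*sqrt(r)dW."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    r = np.empty(n_steps + 1)
    r[0] = r0
    for t in range(n_steps):
        r_pos = max(r[t], 0.0)  # keeps the square root well defined
        r[t + 1] = r[t] + a * (b - r_pos) * dt + sigma * np.sqrt(r_pos * dt) * rng.standard_normal()
    return r

# Illustrative parameters; the discount factors would feed the present values
# behind the normal-contribution formulas.
rates = simulate_cir(r0=0.05, a=0.2, b=0.06, sigma=0.05, T=30, n_steps=360)
dt = 30 / 360
discount = np.exp(-np.cumsum(rates[:-1]) * dt)
```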
{"title":"Analisis Besar Iuran Normal Metode Frozen Initial Liability dan Metode Entry Age Normal Menggunakan Tingkat Suku Bunga Cox-Ingersoll-Ross (CIR)","authors":"Dwi Mahrani","doi":"10.35472/indojam.v3i2.1576","DOIUrl":"https://doi.org/10.35472/indojam.v3i2.1576","url":null,"abstract":"One form of social security program is a pension funding program. A pension plan is a program designed to provide benefits to employees when they retire. Based on the type of membership, the pension program is divided into 2 types, namely individuals and groups. There are differences in the methods used for the two types of participation, namely the Normal Entry Age (EAN) method for individual pension programs and the Frozen Initial Liability (FIL) method for group pension programs. Each period, both individual and group pension program participants are required to pay normal contributions. Things that influence the amount of normal contributions for each worker are the age at which they enter work, the participant's initial salary, and the participant's chance of survival/death. In addition, the calculation of normal contributions also depends on the interest rate used. In this study, the interest rate used is the interest rate of the Cox-Ingersoll-Ross (CIR) model. The normal contribution amount for EAN is constant for each period, while the normal contribution amount for FIL changes when there are participants who retire in the pension program group. The normal FIL contribution tends to be on average compared to the EAN normal contribution for each participant.","PeriodicalId":293313,"journal":{"name":"Indonesian Journal of Applied Mathematics","volume":"16 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139276583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kriptografi Dan Kriptanalisis Citra Digital Menggunakan Algoritma Logistic Map
Pub Date: 2023-11-13 | DOI: 10.35472/indojam.v3i2.1587
Teguh Pasmahendri
Abstract: Data security is essential for companies, institutions, organizations, and individuals who hold confidential information; its purpose is to keep information from being stolen by others. The rise of personal data leakage cases in Indonesia has made people worried about the security of personal data such as identity cards (KTP), emails, and so on. One way to secure data in the form of a digital image is to encrypt it. One such encryption algorithm is the logistic map, a chaos algorithm often used in image cryptography because it can generate a complex sequence of random numbers from a simple recursive polynomial equation. In this research, the author encrypts a digital ID card (KTP) image as an RGB image. The encryption and decryption process using the logistic map algorithm, with parameter values in the interval [3.57; 4] and initial values in the interval [0; 1], successfully encrypts a digital image. The experiments vary the initial value (x_0) and the parameter value (r). Based on experiments with a digital image of 1572 × 966 pixels, the values x_0 = 0.8672 and r = 3.7541 produce a much more varied color-intensity distribution than the histograms obtained with the other initial and parameter values tested. Keywords: Digital Image, Data Security, Logistic Map
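The abstract does not spell out its pixel-diffusion construction, so the following is a minimal sketch of one common approach: quantize the logistic-map orbit into a byte keystream and XOR it with the flattened RGB pixel values. The values x0 = 0.8672 and r = 3.7541 are taken from the abstract; everything else is an assumption.

```python
import numpy as np

def logistic_keystream(x0, r, n):
    """Iterate x_{k+1} = r * x_k * (1 - x_k) and quantize each state to one byte."""
    x = x0
    ks = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1 - x)
        ks[i] = int(x * 256) % 256
    return ks

def xor_image(img, x0=0.8672, r=3.7541):
    """Encrypt or decrypt an (H, W, 3) uint8 RGB array; XOR is its own inverse."""
    flat = img.reshape(-1)
    return (flat ^ logistic_keystream(x0, r, flat.size)).reshape(img.shape)

# cipher = xor_image(plain); recovered = xor_image(cipher)  # same key recovers the image
```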
{"title":"Kriptografi Dan Kriptanalisis Citra Digital Menggunakan Algoritma Logistic Map","authors":"Teguh Pasmahendri","doi":"10.35472/indojam.v3i2.1587","DOIUrl":"https://doi.org/10.35472/indojam.v3i2.1587","url":null,"abstract":"Abstract: Data security is very necessary for companies, institutions, organizations and individuals who have confidential information. The use of data security is intended so that information cannot be stolen by others. The rise of personal data leakage cases in Indonesia has made people worried about the security of their personal data such as identity cards (KTP), emails, and so on. One way to secure data in the form of a digital image is to encrypt it. One of the encryption algorithms is Logistic Map. Logistic map is one of the chaos algorithms that is often used in image cryptography because this algorithm is able to generate a complex array of random numbers with a simple recursive polynomial equation. In this research, the author encrypts a digital ID card image with a size of 3×3 pixels with an RGB image. Based on this explanation, it can be concluded that the encryption and decryption process using the Logistic Map Algorithm using parameter values in the interval range [3.57;4] and initial values in the interval range [0;1] can successfully encrypt a digital image. In this research, changes were made to the initial value (x_0) and different parameter values (r). The results obtained based on experiments conducted by the author using a digital image measuring 1572 × 966 pixels are with the value of x_0 = 0.8672 and r = 3.7541 experiencing a very varied color intensity distribution compared to the histogram produced with the initial value and other parameter values in the research conducted. Keywords: Citra Digital, Data Security, Logistic Map Abstrak: Keamanan data sangat diperlukan bagi perusahaan, institusi, organisasi maupun perseorangan yang memiliki informasi rahasia. Penggunaan keamanan data ditujukan agar informasi tidak dapat dicuri oleh orang lain. Maraknya kasus kebocoran data pribadi di Indonesia membuat khawatir dengan keamanan data pribadinya seperti Kartu Tanda Penduduk (KTP), email, dan lain sebagainya. Adapun salah satu cara untuk mengamankan suatu data berbentuk citra digital adalah dengan mengenkripsikannya. Salah satu algoritma enkripsinya adalah Logistic Map. Logistic map adalah salah satu algoritma chaos yang sering digunakan dalam kriptografi citra karena algoritma ini mampu menghasilkan deretan bilangan acak yang kompleks dengan persamaan polinomial rekursif yang sederhana. Dalam penelitian kali ini penulis mengenkripsikan citra digital KTP dengan ukuran pixel dengan citra RGB. Berdasarkan penjelasan tersebut dapat diambil kesimpulan yaitu proses enkripsi dan dekripsi menggunakan Algoritma Logistic Map dengan menggunakan nilai parameter direntang interval dan nilai awal direntang interval berhasil mengenkripsikan suatu citra digital. Pada penelitian kali ini di lakukan perubahan terhadap nilai awal dan nilai parameter yang berbeda-beda. 
Hasil yang didapatkan berdasarkan percobaan yang telah dilakukan penulis dengan menggunakan citra digital berukuran pixel adalah dengan nilai dan mengalami penyebaran intensitas","PeriodicalId":293313,"journal":{"name":"Indonesian Journal of Applied Mathematics","volume":"44 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139277976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prediksi Terkena Diabetes menggunakan Metode K-Nearest Neighbor (KNN) pada Dataset UCI Machine Learning Diabetes
Pub Date: 2023-11-13 | DOI: 10.35472/indojam.v3i2.1577
Mika Alvionita S
This study uses the K-Nearest Neighbor (KNN) algorithm to predict a person's risk of developing diabetes. The predictor variables are pregnancies, glucose, blood pressure, skin thickness, insulin, BMI, diabetes pedigree function, and age. The analysis shows that glucose, BMI, and age correlate strongly with a diabetes diagnosis, making them strong predictors. Using KNN with k = 1, the model is evaluated with a confusion matrix, yielding an accuracy of 96%, a precision of 91.6%, a sensitivity of 88.7%, and an MSE of 0.1376. These findings indicate that KNN with k = 1 is effective at predicting diabetes from clinical variables, which can support more effective diabetes prevention and treatment.
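A minimal sketch of the pipeline the abstract describes, using scikit-learn; the file name diabetes.csv, the column names, and the train/test split are assumptions, and the metrics will vary with the split.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

# "diabetes.csv" is a placeholder for the UCI/Pima dataset used in the paper
df = pd.read_csv("diabetes.csv")
X, y = df.drop(columns="Outcome"), df["Outcome"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

scaler = StandardScaler().fit(X_train)  # KNN is distance-based, so scale the features
knn = KNeighborsClassifier(n_neighbors=1).fit(scaler.transform(X_train), y_train)
y_pred = knn.predict(scaler.transform(X_test))
print(confusion_matrix(y_test, y_pred))
print("accuracy:", accuracy_score(y_test, y_pred))
```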
{"title":"Prediksi Terkena Diabetes menggunakan Metode K-Nearest Neighbor (KNN) pada Dataset UCI Machine Learning Diabetes","authors":"Mika Alvionita S","doi":"10.35472/indojam.v3i2.1577","DOIUrl":"https://doi.org/10.35472/indojam.v3i2.1577","url":null,"abstract":"Penelitian ini menggunakan algoritma K- Nearest Neighbor (KNN) untuk memprediksi resiko seseorang terkena diabetes. Variabel yang digunakan dalam prediksi adalah pregnancies, glucose, blood pressure, skin thickness, insulin, BMI, diabetes pedigree function, dan age. Analisis menunjukkan bahwa Glucose, BMI, dan Age memiliki korelasi tinggi dengan diagnosis diabetes, menjadikannya indikator yang kuat untuk prediksi. Melalui metode KNN dengan k=1, dilakukan evaluasi model menggunakan Confusion Matrix. Hasil menunjukkan akurasi sebesar 96%, precision sebesar 91,6%, sensitivitas sebesar 88,7%, dan MSE sebesar 0,1376. Temuan ini menunjukkan bahwa KNN dengan k=1 efektif dalam memprediksi diabetes berdasarkan variabel klinis. Informasi ini dapat memberikan manfaat dalam pencegahan dan pengobatan diabetes secara lebih efektif.","PeriodicalId":293313,"journal":{"name":"Indonesian Journal of Applied Mathematics","volume":"26 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139278394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficiency and Accuracy in Quadratic Curve Fitting: A Comparative Analysis of Optimization Techniques
Pub Date: 2023-10-31 | DOI: 10.35472/indojam.v3i2.1575
A. Alridha
In this paper, we investigate how optimization methods can be applied to curve fitting with a quadratic model. Synthetic experimental data is generated, and two distinct optimization approaches, differential evolution and the Nelder-Mead algorithm, are applied to find the optimal parameters of the quadratic model. The objective function incorporates both the mean squared error and the correlation coefficient. Comparing the results of these algorithms reveals trade-offs between the rate of convergence and the quality of the fit. This work highlights the need to select an optimization algorithm appropriate to the circumstances and provides insights into the balance between accurate curve fitting and efficient use of computational resources.
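A sketch of the comparison on synthetic data, using SciPy's implementations of both optimizers with a plain MSE objective; the paper's objective also involves the correlation coefficient, and the bounds and noise level here are assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 50)
true = np.array([2.0, -1.0, 0.5])  # a, b, c of y = a*x^2 + b*x + c
y = true[0] * x**2 + true[1] * x + true[2] + rng.normal(0, 0.3, x.size)

def mse(p):
    a, b, c = p
    return np.mean((y - (a * x**2 + b * x + c)) ** 2)

bounds = [(-10, 10)] * 3
de = differential_evolution(mse, bounds, seed=1)            # global, population-based
nm = minimize(mse, x0=np.zeros(3), method="Nelder-Mead")    # local, simplex-based
print("DE: ", de.x, de.fun)
print("NM: ", nm.x, nm.fun)
```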
{"title":"Efficiency and Accuracy in Quadratic Curve Fitting: A Comparative Analysis of Optimization Techniques","authors":"A. Alridha","doi":"10.35472/indojam.v3i2.1575","DOIUrl":"https://doi.org/10.35472/indojam.v3i2.1575","url":null,"abstract":"In this paper, we investigate an optimization methods might be applied for solving curve fitting by making use of a quadratic model. To discover the ideal parameters for the quadratic model, synthetic experimental data is generated, and then two unique optimization approaches, namely differential evolution and the Nelder-Mead algorithm, are applied to the problem in order to find the optimal values for those parameters. The mean squared error as well as the correlation coefficient are both metrics that are incorporated into the objective function. When the results of these algorithms are compared, trade-offs between the rate of convergence and the quality of the fit are revealed. This work sheds light on the necessity of selecting proper optimization algorithms for specific circumstances and provides insights into the balance that must be struck between accurate curve fitting and efficient use of computational resources in the process of curve fitting.","PeriodicalId":293313,"journal":{"name":"Indonesian Journal of Applied Mathematics","volume":"56 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139306490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The use of thrift goods in daily life saves money and helps the environment. Many such objects are difficult to degrade in nature yet still hold economic value, for example used electronic equipment such as laptops. At the same time, laptops have become quite important in today's classroom environment, and even recent computers with good specifications are fairly costly, so used laptops are one answer to this. A seller's asking price usually takes only a few factors into account, so the price set does not always match the item's condition. The aim of this paper is to apply the zero-order Sugeno fuzzy approach to determine the selling price of a used laptop. The system is built on characteristics such as laptop age, physical condition, RAM, new purchase price, and used selling price. The simulation findings suggest that fuzzy logic employing the zero-order Sugeno approach can be used to determine the selling price of a used laptop while accounting for the affecting variables.
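In a zero-order Sugeno system each rule's consequent is a constant, and the crisp output is the firing-strength-weighted average of those constants. The sketch below shows that mechanism with two of the paper's inputs; the membership functions, rule base, and price constants are invented for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def sugeno_price(age_years, ram_gb):
    # Hypothetical memberships; consequents are constants (zero-order) in IDR millions
    new   = tri(age_years, -1, 0, 3)
    old   = tri(age_years, 2, 6, 11)
    small = tri(ram_gb, 0, 4, 8)
    large = tri(ram_gb, 6, 16, 33)
    rules = [  # (firing strength = min of antecedents, constant output)
        (min(new, large), 9.0),
        (min(new, small), 6.0),
        (min(old, large), 4.5),
        (min(old, small), 2.5),
    ]
    num = sum(w * z for w, z in rules)   # weighted average defuzzification
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(sugeno_price(age_years=1, ram_gb=16))
```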
{"title":"Determining The Selling Price of Thrift Using The Fuzzy Sugeno Method","authors":"Radhiah Radhiah, Siti Rusdiana, Syaiful Hamdi, Nurmaulidar Nurmaulidar, Intan Syahrini, Mahmudi Mahmudi, Muhammad Ikhwan","doi":"10.35472/indojam.v3i2.1598","DOIUrl":"https://doi.org/10.35472/indojam.v3i2.1598","url":null,"abstract":"The use of thrift goods in life has the effect of saving and loving the environment. Many of these objects are difficult to degrade in nature and are nevertheless thought to have economic worth, for example used electronic equipment such as laptops. On the other hand, in today's classroom environment, laptops are quite important. Even the most recent computers with good features are fairly costly, thus used laptops are one answer to this. The seller's selling price usually only takes a few elements into account, therefore the price set does not always match the requirements. The aim of this paper is to apply the zero order Sugeno fuzzy approach to determine the selling price of old laptop. The system is built with characteristics such as laptop age, physical condition, RAM, new purchase price, and used selling price. The simulation findings suggest that fuzzy logic employing the zero-order Sugeno approach can be utilized to determine the selling price of old laptop while accounting for the affecting variables.","PeriodicalId":293313,"journal":{"name":"Indonesian Journal of Applied Mathematics","volume":"10 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139311737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
k-Means Clustering to Enhance the Petrified Wood Composition Data Analyses and Its Interpretation
Pub Date: 2023-07-30 | DOI: 10.35472/indojam.v3i1.1288
Triyana Muliawati, D. G. Harbowo, Andre Markus Fernando Lubis, Juan Daniel Turnip, Erina Rosalia Irda, Adelia Azahra, Yanti Marito
Geologically, the fossilization of wood requires appropriate conditions, and some specimens have been preserved for millions of years. In nature, the organic mass of wood must be quickly replaced by inorganic elements before it decomposes under harsh geological conditions. Inorganic oxides such as silica oxide are known to be the main components of most wood specimens (up to 80%). The presence of alkaline oxides such as sodium and potassium oxide appears to play a major role in the availability of dissolved silica during petrification, but their significance in the petrification of fossilized plant wood is not yet known. Therefore, in this study, cluster analysis was conducted to determine the relationship between the presence of silica and alkaline compounds in petrified wood fossils. The approach used was k-means clustering supported by the elbow method, which reviews and orders a complex data set into subsets, thus allowing interpretation. The results showed that the clustering of the fossil wood composition data was optimal at k = 3. There is a fair negative correlation between the presence of silica and alkali oxide compounds (-0.504 to -0.387), and a strong positive correlation with other inorganic compounds (+0.957). The presence of sodium and potassium is strongly correlated during silicification (+0.905). Additionally, the clustering made the wood fossilization process easier to describe, especially through data regression, and the data visualization clarifies the role of alkaline oxides in wood silicification. This study furthers our understanding of wood fossilization, especially the diagenesis of wood chemical composition in geological history.
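A minimal sketch of k-means with an elbow check, as the study applies it; the random composition matrix below stands in for the oxide measurements, which are not reproduced in the abstract.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Placeholder composition matrix (rows: specimens; columns: SiO2, Na2O, K2O, ...)
X = StandardScaler().fit_transform(np.random.default_rng(0).random((30, 5)))

# Elbow method: inspect how inertia falls as k grows, then pick the bend (k = 3 here)
inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in range(1, 9)}
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
```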
{"title":"k-Means Clustering to Enhance the Petrified Wood Composition Data Analyses and Its Interpretation","authors":"Triyana Muliawati, D. G. Harbowo, Andre Markus Fernando Lubis, Juan Daniel Turnip, Erina Rosalia Irda, Adelia Azahra, Yanti Marito","doi":"10.35472/indojam.v3i1.1288","DOIUrl":"https://doi.org/10.35472/indojam.v3i1.1288","url":null,"abstract":"Geologically, the fossilization of wood materials into fossils requires appropriate conditions, some of which have been preserved for millions of years. In nature, the organic mass of wood must be quickly replaced by inorganic elements before it decomposes under harsh geological conditions. Anorganic oxides such as silica-oxide, are known to be the main components of most wood specimens (up to 80%). The presence of alkaline oxides such as sodium and potassium oxide seems to play a major role in the presence of dissolved silica during petrification. However, their significance in the petrification phenomenon that occurs in fossilized plant wood is not yet known. Therefore, in this study, cluster analysis was conducted to determine the relationship between the presence of silica and alkaline compounds in petrified wood fossils. The approach used was -means clustering supported by the Elbow Method, which aims to review and order a complex set of data into subsets, thus allowing interpretation. The results showed that the clustering of the fossil wood composition data was optimal at = 3. There is a fair correlation between the presence of silica and alkali oxide compounds (-0.504 to -0.387), as well as with another inorganic compounds (+0.957). The presence of sodium and potassium is strongly correlated during silicification (+0.905). Additionally, the results of data clustering made the wood fossilization process susceptible to describe, especially through data regression. The data visualization provides more facts and proper explanations of the role of alkaline oxides in wood silicification. This study furthers our understanding of wood fossilization, especially the diagenesis of wood chemical composition in geological history.","PeriodicalId":293313,"journal":{"name":"Indonesian Journal of Applied Mathematics","volume":"35 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139353812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prediksi Finansial Distress pada Salah Satu Bank Konvensional Menggunakan Machine Learning
Pub Date: 2023-07-30 | DOI: 10.35472/indojam.v3i1.1284
F. Lestari
Financial distress is when a company lacks sufficient funds to run its business, and predicting it is needed to prevent bankruptcy. In this study, financial distress predictions were made from financial ratios obtained from the monthly financial reports of a conventional bank, after which the ratio with the most influence on financial distress was determined. Several machine learning models were used: Logistic Regression, Support Vector Machine, and Random Forest. Based on the analysis, the best model for predicting financial distress is the Random Forest, with an accuracy of 96.77%, and from this model the ratio most influential on financial distress is Total Asset Turnover.
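A sketch of the three-model comparison and the feature-importance readout used to single out the dominant ratio; the file ratios.csv, the column names, and the split are placeholders.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# "ratios.csv" is a placeholder for the monthly financial-ratio dataset
df = pd.read_csv("ratios.csv")
X, y = df.drop(columns="distress"), df["distress"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for model in (LogisticRegression(max_iter=1000), SVC(), RandomForestClassifier(random_state=0)):
    acc = accuracy_score(y_te, model.fit(X_tr, y_tr).predict(X_te))
    print(type(model).__name__, acc)

# Impurity-based importances point at the most influential ratio
rf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print(sorted(zip(rf.feature_importances_, X.columns), reverse=True)[:3])
```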
{"title":"Prediksi Finansial Distress pada Salah Satu Bank Konvensional Menggunakan Machine Learning","authors":"F. Lestari","doi":"10.35472/indojam.v3i1.1284","DOIUrl":"https://doi.org/10.35472/indojam.v3i1.1284","url":null,"abstract":"Financial distress is when a company experiences a shortage or insufficient funds to run the company. Prediction of financial distress is needed to prevent bankruptcy. In this study, financial distress predictions were made based on financial ratios obtained from monthly financial reports from a bank convention, after which the proportion that had the most influence on financial distress was determined. The models used in this study are several machine learning models, namely, Logistic Regression, Support Vector Machine, and Random Forest. Based on the analysis results, the best model for predicting financial pressure is the Random Forest Model, with an accuracy of 96.77%. Based on the best model obtained, namely the Random Forest, it can be determined that the ratio that is very influential on financial distress is the ratio of Total Asset Turnover.","PeriodicalId":293313,"journal":{"name":"Indonesian Journal of Applied Mathematics","volume":"54 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139353774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simulasi Pergerakan Dana Tabarru Produk Asuransi Jiwa Unit Link Syariah
Pub Date: 2023-07-30 | DOI: 10.35472/indojam.v3i1.1261
Indah Gumala Andirasdini, Ratih Suryaningsih
Premiums, known as contributions in sharia insurance, form part of the tabarru' fund paid by participants. Tabarru' provides "benevolent funds" with the sincere intention of mutual help among fellow takaful participants when one of them suffers a misfortune, such as death. The funds provided by insurance participants are used by the insurance company to pay claims or insurance benefits. A sharia insurance company is considered financially healthy when the tabarru' funds are managed well enough to pay claims; in addition, the company must have sufficient funds to cover an underwriting deficit should one occur. This study simulates the adequacy of a company's tabarru' funds in each period, assuming no change in the number of participants during that period, so that the simulation results indicate whether the tabarru' funds suffice to pay participant claims. The total tabarru' fund is the sum of each participant's tabarru' contribution, which depends on the sharia unit-linked life insurance product and the participant's policy period. The tabarru' fund sufficiency is forecast using an IMA(2,1) time series model. The results show that the average tabarru' fund for each product will be used up in the following year when at least five participants make claims. We also find that the company can survive if the claims paid from the tabarru' fund are balanced by the participants' tabarru' contributions.
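An IMA(2,1) model is the ARIMA(0,2,1) special case, so it can be fitted directly with statsmodels; a minimal sketch, assuming a monthly fund-balance series in a placeholder file tabarru.csv.

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# "tabarru.csv" is a placeholder monthly series of the pooled tabarru' fund balance
balance = pd.read_csv("tabarru.csv", index_col=0, parse_dates=True).squeeze()

model = ARIMA(balance, order=(0, 2, 1)).fit()   # IMA(2,1) == ARIMA(0,2,1)
forecast = model.forecast(steps=12)             # project the fund over the next year
print(forecast)
```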
{"title":"Simulasi Pergerakan Dana Tabarru Produk Asuransi Jiwa Unit Link Syariah","authors":"Indah Gumala Andirasdini, Ratih Suryaningsih","doi":"10.35472/indojam.v3i1.1261","DOIUrl":"https://doi.org/10.35472/indojam.v3i1.1261","url":null,"abstract":"Premi or known as Contributions in sharia insurance are part of tabbaru fund which is paid by participants. Tabarru’ has puposes to provide the \"benevolent funds\" with the sincere intention of helping each other among fellow \"takaful\" participants when one of them suffers a misfortune, such as death. These funds provided by insurance participants will be used to pay claims or insurance benefits by insurance companies. A sharia insurance company is considered doing well financially when tabbaru's funds are well managed to paid the insurance claim. In addition, the company must have sufficient funds to overcome the underwriting deficit in case it happen. This study aims to simulate the adequacy of a company's tabarru’ funds in each period, assuming there is no change in the number of participants during that period. The simulation results can conclude the adequacy of tabarru funds to pay participant claims. Sum of tabarru’ funds are calculated based on the sum of each participant's tabarru’ contribution, which is affected by each sharia-linked unit life insurance product and the participant's policy period. The tabarru’ fund sufficiency simulation is predicted using the IMA(2,1) time series model. The results of this study conclude that the average tabarru funds for each product will be used up in the following year with the criteria of the number of people making claims of not less than five participants. In this study, we found that the company can survive if the number of claims paid from the tabarru' fund with participants tabarru' fund contributions is balanced.","PeriodicalId":293313,"journal":{"name":"Indonesian Journal of Applied Mathematics","volume":"47 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139353707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Penerapan Algoritma Batchelor-Wilkins dalam Pengklasteran Graf
Pub Date: 2023-07-30 | DOI: 10.35472/indojam.v3i1.1259
Christyan Tamaro Nadeak
The Batchelor-Wilkins algorithm is a simple heuristic clustering algorithm used when the number of classes is unknown. In this paper we apply the Batchelor-Wilkins algorithm to graph clustering, specifically to the banana tree graph B(n,k), the graph obtained by connecting one leaf of each of n copies of the complete bipartite graph K_{1,k-1} to a single root vertex.
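A sketch of the maximin (Batchelor-Wilkins) procedure on B(n,k) built with networkx, using shortest-path distance between vertices; the stopping threshold of half the mean inter-center distance is one common choice, and the paper's exact criterion may differ.

```python
import networkx as nx

def banana_tree(n, k):
    """B(n, k): connect one leaf of each of n copies of the star K_{1,k-1} to a root."""
    G = nx.Graph()
    G.add_node("root")
    for i in range(n):
        hub = f"h{i}"
        G.add_edge(hub, f"{i}-leaf0")   # the leaf that is joined to the root
        G.add_edge("root", f"{i}-leaf0")
        for j in range(1, k - 1):
            G.add_edge(hub, f"{i}-leaf{j}")
    return G

def batchelor_wilkins(G, theta=0.5):
    """Maximin clustering with graph distance; a new center must lie farther from
    all existing centers than theta times the mean inter-center distance."""
    d = dict(nx.all_pairs_shortest_path_length(G))
    nodes = list(G)
    centers = [nodes[0]]                                  # arbitrary first center
    centers.append(max(nodes, key=lambda v: d[centers[0]][v]))
    while True:
        far = max(nodes, key=lambda v: min(d[c][v] for c in centers))
        gap = min(d[c][far] for c in centers)
        pairs = [(a, b) for i, a in enumerate(centers) for b in centers[i + 1:]]
        if gap <= theta * sum(d[a][b] for a, b in pairs) / len(pairs):
            break                                         # candidate too close: stop
        centers.append(far)
    return {v: min(centers, key=lambda c: d[c][v]) for v in nodes}

print(batchelor_wilkins(banana_tree(3, 4)))
```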
{"title":"Penerapan Algoritma Batchelor-Wilkins dalam Pengklasteran Graf","authors":"Christyan Tamaro Nadeak","doi":"10.35472/indojam.v3i1.1259","DOIUrl":"https://doi.org/10.35472/indojam.v3i1.1259","url":null,"abstract":"Batchelor-Wilkins Algorithm is a simple and heuristic clustering algorithm used when the number of classes is unknown. In this paper we will use Batchelor-Wilkins algorithm in graph clustering, specifically a Banana Tree Graph B(n,k), a graph obtained by connecting one leaf of each of n copies of a complete bipartite graph K_{1,k-1} to a single root vertex.","PeriodicalId":293313,"journal":{"name":"Indonesian Journal of Applied Mathematics","volume":"15 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139353697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Indeks Harga Komsumen (IHK) di Lampung Menggunakan Autoregressive Integrated Moving Average (ARIMA)
Pub Date: 2023-07-28 | DOI: 10.35472/indojam.v3i1.1274
Mika Sitinjak, Nuramaliyah
The Consumer Price Index (CPI) is an indicator that influences economic growth. The CPI measures the average price change of a basket of goods and services consumed by households over a period of time, and it is also used to measure a country's inflation, which is described by changes in the CPI over time. To anticipate and minimize the economic risks caused by inflation, this study forecasts the CPI for the next six months using ARIMA (Autoregressive Integrated Moving Average) models. The candidate models are ARIMA(0,2,0), ARIMA(0,2,1), ARIMA(1,2,0), and ARIMA(1,2,1), and the best model is the one with the smallest AIC value. On this basis, the best model for predicting the CPI is ARIMA(0,2,1), with an AIC of 83.21. This model also passes diagnostics with white-noise residuals, so forecasts from it should be more accurate.
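A sketch of the order selection by AIC and the six-month forecast, with a Ljung-Box check on the residuals; the file cpi.csv is a placeholder for the Lampung CPI series.

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

# "cpi.csv" is a placeholder for the monthly Lampung CPI series
cpi = pd.read_csv("cpi.csv", index_col=0, parse_dates=True).squeeze()

candidates = [(0, 2, 0), (0, 2, 1), (1, 2, 0), (1, 2, 1)]
fits = {order: ARIMA(cpi, order=order).fit() for order in candidates}
best = min(fits, key=lambda o: fits[o].aic)          # smallest AIC wins

print(best, fits[best].aic)
print(fits[best].forecast(steps=6))                  # six-month CPI forecast
print(acorr_ljungbox(fits[best].resid, lags=[12]))   # white-noise residual check
```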
{"title":"Indeks Harga Komsumen (IHK) di Lampung Menggunakan Autoregressive Integrated Moving Average (ARIMA)","authors":"Mika Sitinjak, Nuramaliyah ","doi":"10.35472/indojam.v3i1.1274","DOIUrl":"https://doi.org/10.35472/indojam.v3i1.1274","url":null,"abstract":"The Consumer Price Index (CPI) is an indicator that influences economic growth. CPI is an index that calculates the average of price change of a group of goods and services consumed by households in a certain period of time. CPI is also used to measure inflation in a country. Inflation is described by changes in the CPI from time to time. To anticipate and minimize economic risks caused by inflation, forecasting will be carried out on CPI data. In this study, the CPI will be predicted for the next 6 months using the ARIMA (Autoregressive Integrated Moving Average) model. The result of this research shows that the ARIMA models that can be used to predict CPI are ARIMA (0,2,0), ARIMA (0,2,1), ARIMA (1,2,0), and ARIMA (1,2,1) . The selection of the best model is carried out based on the model that has the smallest AIC value. Based on this, the best model used to predict CPI is the ARIMA model (0,2,1) with an AIC value of 83.21. In addition, this model fulfills diagnostics with white noise residuals, so that forecasting results using this model will be more accurate.","PeriodicalId":293313,"journal":{"name":"Indonesian Journal of Applied Mathematics","volume":"18 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139354010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}