An Improved Evolutionary Algorithm in Formulating a Diet for Grouper
Pub Date : 2023-10-05 DOI: 10.33166/aetic.2023.05.006
Cai-Juan Soong, Rosshairy Abd Rahman, Razamin Ramli
This paper highlights the high demand for fish products in many countries and, in particular, the high demand for grouper species for human consumption. This demand has led to an insufficient supply of wild-caught grouper in the market, justifying the need for farmed or cultured grouper. In grouper farming, large amounts of trash fish are needed as feed, since grouper is a carnivorous fish. However, because trash fish is costly, searching for alternative feed ingredients through feed-formulation modelling is one option for reducing or minimising the farming cost. This motivates the search for methods that give the best combination of feedstuff ingredients with appropriate nutrients when formulating the feed. One prospective method is the Evolutionary Algorithm (EA), which has been applied to similar diet-formulation problems for several types of animals, including livestock, poultry and shrimp. Hence, this paper proposes an improved EA, named SR-SD-EA, built around three important EA operators: initialization, selection and mutation. A semi-random initialization operator is introduced to filter some important constraints, thereby increasing the chances of obtaining feasible formulations. Subsequently, a novel selection operator embeds the concept of standard deviation into the SR-SD-EA as part of the function minimising the total cost of the formulated grouper feed. Finally, an enhanced boundary-based mutation is introduced to ensure that the crucial constraint on the ingredients' total weight is met. The overall structure of the SR-SD-EA is presented as a framework in which the three methodological contributions are embedded. Preliminary findings show that the cost computed from the Best-So-Far feed formulation is comparable, while all crucial constraints are fulfilled.
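To make the structure of such an algorithm concrete, the sketch below evolves a least-cost feed mix under a fixed total weight and minimum nutrient levels. The ingredient table, the nutrient limits, the penalty-based fitness and the simple truncation selection are all illustrative assumptions; the paper's standard-deviation-based selection operator and exact parameters are not reproduced here.

```python
import random

# Hypothetical ingredient table: (name, cost per kg, protein fraction, lipid fraction).
# Values are placeholders for illustration, not the feedstuff data used in the paper.
INGREDIENTS = [
    ("fish meal",    1.80, 0.65, 0.08),
    ("soybean meal", 0.60, 0.44, 0.02),
    ("corn gluten",  0.55, 0.60, 0.02),
    ("fish oil",     2.10, 0.00, 0.99),
    ("wheat flour",  0.40, 0.12, 0.02),
]
TOTAL_WEIGHT = 100.0                  # kg per batch: the crucial total-weight constraint
MIN_PROTEIN, MIN_LIPID = 45.0, 9.0    # assumed nutrient requirements per batch (kg)

def rescale(w):
    """Scale a weight vector so the ingredients' total weight is exactly met."""
    s = sum(w)
    return [x * TOTAL_WEIGHT / s for x in w]

def init_formulation():
    """Semi-random initialization: random draws, immediately rescaled so the
    total-weight constraint holds from the first generation."""
    return rescale([random.uniform(1.0, TOTAL_WEIGHT) for _ in INGREDIENTS])

def cost(f):
    return sum(w * ing[1] for w, ing in zip(f, INGREDIENTS))

def fitness(f):
    """Cost plus a heavy penalty for any unmet nutrient constraint."""
    protein = sum(w * ing[2] for w, ing in zip(f, INGREDIENTS))
    lipid = sum(w * ing[3] for w, ing in zip(f, INGREDIENTS))
    penalty = 100.0 * (max(0.0, MIN_PROTEIN - protein) + max(0.0, MIN_LIPID - lipid))
    return cost(f) + penalty

def mutate(f, step=5.0):
    """Boundary-style mutation: perturb one ingredient, clamp at zero,
    then rescale so the weights again sum to TOTAL_WEIGHT."""
    g = f[:]
    i = random.randrange(len(g))
    g[i] = max(0.0, g[i] + random.uniform(-step, step))
    return rescale(g)

def evolve(pop_size=40, generations=500):
    pop = [init_formulation() for _ in range(pop_size)]
    best = min(pop, key=fitness)                      # Best-So-Far solution
    for _ in range(generations):
        pop.sort(key=fitness)
        best = min(best, pop[0], key=fitness)
        parents = pop[: pop_size // 2]                # simple truncation selection (see note above)
        pop = parents + [mutate(random.choice(parents)) for _ in range(pop_size - len(parents))]
    return best

if __name__ == "__main__":
    best = evolve()
    print(f"best cost: {cost(best):.2f}")
    for (name, *_), w in zip(INGREDIENTS, best):
        print(f"  {name}: {w:.1f} kg")
```

The rescaling step inside both initialization and mutation mirrors the idea of keeping the total-weight constraint satisfied throughout the search rather than repairing it afterwards.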
{"title":"An Improved Evolutionary Algorithm in Formulating a Diet for Grouper","authors":"Cai-Juan Soong, Rosshairy Abd Rahman, Razamin Ramli","doi":"10.33166/aetic.2023.05.006","DOIUrl":"https://doi.org/10.33166/aetic.2023.05.006","url":null,"abstract":"This paper reveals the high demand of fish products in many countries, which subsequently highlighted the high demand of grouper fish species for human consumption. This high demand leads to the insufficient supply of wild ocean grouper fish in the market, thus justifying the need for farmed or cultured grouper fish. Basically, in grouper fish farming, large amounts of trash fish are needed as the feed for grouper fish, which is the carnivorous type of fish. However, since the cost of trash fish is too high, searching for alternative ingredients for the feed through modelling of feed formulation is an option for reducing or minimizing the farming cost. This led to the search for methods in giving the best combination of feedstuff ingredients with appropriate nutrients in formulating the feed. One prospective method is the Evolutionary Algorithm (EA) that has been applied in solving similar problems of diet formulation for several types of animals including livestock, poultry and shrimp. Hence, in this paper, an improved EA method known as the SR-SD-EA is proposed highlighting three important EA operators, which are initialization, selection and mutation. A semi random initialization operator is introduced to filter some important constraints thus increase the chances of obtaining feasible formulations or solutions. Subsequently, the novel selection operator embeds the concept of standard deviation in the SR-SD-EA as part of the function in minimizing the total cost of the formulated grouper fish feed. Eventually, the enhanced boundary-based mutation is also introduced in the algorithm to ensure the crucial constraint of the ingredients’ total weight must be met. The overall structure of the SR-SD-EA is presented as a framework, where the three methodological contributions are embedded. The preliminary findings of SR-SD-EA show that the obtained cost computed based on the Best-So-Far feed formulation as the solution is comparable, while all the crucial constraints are fulfilled.","PeriodicalId":36440,"journal":{"name":"Annals of Emerging Technologies in Computing","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135483283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Application of Computer-Aided Under-Resourced Language Translation for Malay into Kadazandusun
Pub Date : 2023-10-05 DOI: 10.33166/aetic.2023.05.002
Mohd Shamrie Sainin, Minah Sintian, Suraya Alias, Asni Tahir
Machine translation (MT) is computer-aided language translation: an application in which computers translate one natural language into another. There are many online translation tools, but thus far none offers sequence-of-text translation for the under-resourced Kadazandusun language. Although web-based and mobile Kadazandusun dictionaries are available, these systems do not translate more than one word at a time. Hence, this paper presents a preliminary translation of Malay to Kadazandusun. Basic word-to-word translation with dictionary alignment, based on Direct Machine Translation (DMT), is selected to begin exploring this translation domain; DMT is one of the earliest translation methods and relies on a word-to-word (sequence-to-sequence) approach. This paper investigates the under-resourced language and the task of translating from Malay to Kadazandusun and vice versa. It presents the application and the process, as well as the results of the system according to the basic Kadazandusun word order (Verb-Subject-Object), with translation quality measured using the Bilingual Evaluation Understudy (BLEU) score. Several phases are involved, including data collection (word-pair translation), preprocessing, text selection, translation procedures, and performance evaluation. The preliminary approach is shown to be capable of producing BLEU scores of up to 0.5, which indicates that the translation is readable but requires post-editing for better comprehension. The findings are significant for the quality of under-resourced language translation and as a starting point for other machine translation methodologies, such as statistical or deep learning-based translation.
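A rough sketch of word-to-word direct translation with dictionary alignment and BLEU scoring is given below. The Malay-Kadazandusun word pairs, the SVO-to-VSO reordering rule, and the reference translation are placeholders for illustration only, not entries from the study's dictionary.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Toy Malay -> Kadazandusun lexicon; the target words are synthetic placeholders,
# not a validated bilingual dictionary.
LEXICON = {"saya": "kd_saya", "makan": "kd_makan", "nasi": "kd_nasi"}

def translate(malay_sentence, reorder_svo_to_vso=True):
    """Word-to-word direct translation with dictionary alignment.
    Unknown words are passed through unchanged."""
    words = malay_sentence.lower().split()
    if reorder_svo_to_vso and len(words) == 3:
        # naive reordering for a 3-word Subject-Verb-Object input -> Verb-Subject-Object
        words = [words[1], words[0], words[2]]
    return [LEXICON.get(w, w) for w in words]

hypothesis = translate("saya makan nasi")
reference = ["kd_makan", "kd_saya", "kd_nasi"]        # assumed gold translation
score = sentence_bleu([reference], hypothesis,
                      smoothing_function=SmoothingFunction().method1)
print(" ".join(hypothesis), "| BLEU:", round(score, 2))
```

Smoothing is applied because short sentences have no higher-order n-grams; without it, BLEU on a three-word output collapses to zero even for a correct translation.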
{"title":"The Application of Computer-Aided Under-Resourced Language Translation for Malay into Kadazandusun","authors":"Mohd Shamrie Sainin, Minah Sintian, Suraya Alias, Asni Tahir","doi":"10.33166/aetic.2023.05.002","DOIUrl":"https://doi.org/10.33166/aetic.2023.05.002","url":null,"abstract":"A computer-aided language translation using a Machine translation (MT) is an application performed by computers (machines) that translates one natural language to another. There are many online language translation tools, but thus far none offers a sequence of text translations for the under-resourced Kadazandusun language. Although there are web-based and mobile applications of Kadazandusun dictionaries available, the systems do not translate more than one word. Hence, this paper aims to present the discussion of the preliminary translation of Malay to Kadazandusun. The basic word-to-word with dictionary alignment translation based on Direct Machine Translation (DMT) is selected to begin the exploration of the translation domain where DMT is one of the earliest translation methods which relies on the word-to-word approach (sequence-to-sequence model). This paper aims to investigate the under-resourced language and the task of translating from the Malay language to the Kadazandusun language or vice versa. This paper presents the application and the process as well as the results of the system according to the basic Kadazandusun word arrangement (Verb-Subject-Object) and its translation quality using the Bilingual Evaluation Understudy (BLEU) score. Several phases are involved during the process, including data collection (word pair translation), preprocessing, text selection, translation procedures, and performance evaluation. The preliminary language translation approach is proven to be capable of producing up to 0.5 BLEU scores which indicate that the translation is readable, however, requires post-editing for better comprehension. The findings are significant for the quality of the under-resourced language translation and as a starting point for other machine translation methodologies such as statistical or deep learning-based translation.","PeriodicalId":36440,"journal":{"name":"Annals of Emerging Technologies in Computing","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135484108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Chest X-Ray Image Annotation based on Spatial Relationship Feature Extraction
Pub Date : 2023-10-05 DOI: 10.33166/aetic.2023.05.007
Mohd Nizam Saad, Mohamad Farhan Mohamad Mohsin, Hamzaini Abdul Hamid, Zurina Muda
Digital imaging has become an essential element in every medical institution. Medical image retrieval, such as for chest X-rays (CXR), must therefore be improved via feature extraction and annotation before the images are stored in image databases. To date, many methods have been introduced to annotate medical images using spatial relationships after these features are extracted. However, annotation performance is inconsistent across methods and has not shown promising results for image retrieval. Each method still struggles with at least two major problems. Firstly, the annotation model is weak because it does not consider the object shape and relies on gross shape estimation. Secondly, the model only works for simple object placements. As a result, it is difficult to determine the spatial relationship features accurately after extraction when annotating images. Hence, this study proposes a new model that annotates nodule location within the lung zones of CXR images using extracted spatial relationship features to improve image retrieval. To achieve this, a methodology consisting of six phases of CXR image annotation using the extracted spatial relationship features is introduced. This methodology covers the full cycle of the annotation task, from image pre-processing to the determination of spatial relationship features for the lung zones in the CXR. Applying the methodology also produced a new semi-automatic annotation system, named CHEXRIARS, which acts as a tool for annotating the extracted spatial relationship features in CXR images. CHEXRIARS performance is assessed with a retrieval test using two common measures, precision and recall (PNR). Apart from CHEXRIARS, three other annotation methods, namely object slope, object projection and comparison of region boundaries, are included in the retrieval performance test. Overall, the interpolated PNR curve of CHEXRIARS shows the best shape, as it is the curve closest to the value of 1 on both the X-axis and Y-axis. The area under the curve also reveals that CHEXRIARS attains the highest score, 0.856, compared with the other three annotation methods. The retrieval performance test indicates that the proposed annotation model produces outstanding results and improves image retrieval.
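The retrieval test itself can be sketched as follows: from a ranked retrieval list and a set of relevant images, compute the interpolated precision-recall curve and its area under the curve. The image identifiers and relevance labels below are invented for illustration, not data from the study.

```python
import numpy as np

def interpolated_pr_auc(relevant, ranked):
    """relevant: set of relevant image ids; ranked: ids ordered by the retrieval system."""
    precisions, recalls = [], []
    hits = 0
    for k, img in enumerate(ranked, start=1):
        if img in relevant:
            hits += 1
        precisions.append(hits / k)
        recalls.append(hits / len(relevant))
    # standard interpolation: precision at recall r is the max precision at any recall >= r
    interp = [max(precisions[i:]) for i in range(len(precisions))]
    return np.trapz(interp, recalls), list(zip(recalls, interp))

relevant = {"cxr_03", "cxr_07", "cxr_11"}                     # assumed ground truth
ranked = ["cxr_03", "cxr_05", "cxr_07", "cxr_02", "cxr_11"]   # assumed system ranking
auc, curve = interpolated_pr_auc(relevant, ranked)
print("area under interpolated PR curve:", round(auc, 3))
```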
{"title":"Chest X-Ray Image Annotation based on Spatial Relationship Feature Extraction","authors":"Mohd Nizam Saad, Mohamad Farhan Mohamad Mohsin, Hamzaini Abdul Hamid, Zurina Muda","doi":"10.33166/aetic.2023.05.007","DOIUrl":"https://doi.org/10.33166/aetic.2023.05.007","url":null,"abstract":"Digital imaging has become an essential element in every medical institution. Therefore, medical image retrieval such as chest X-ray (CXR) must be improved via novel feature extraction and annotation activities before they are stored into image databases. To date, many methods have been introduced to annotate medical images using spatial relationships after these features are extracted. However, the annotation performance for each method is inconsistent and does not show promising achievement to retrieve images. It is noticed that each method is still struggling with at least two big problems. Firstly, the recommended annotation model is weak because the method does not consider the object shape and rely on gross object shape estimation. Secondly, the suggested annotation model can only be functional for simple object placement. As a result, it is difficult to determine the spatial relationship feature after they are extracted to annotate images accurately. Hence, this study aims to propose a new model to annotate nodule location within lung zone for CXR image with extracted spatial relationship feature to improve image retrieval. In order to achieve the aim, a methodology that consists of six phases of CXR image annotation using the extracted spatial relationship features is introduced. This comprehensive methodology covers all cycles for image annotation tasks starting from image pre-processing until determination of spatial relationship features for the lung zone in the CXR. The outcome from applying the methodology also enables us to produce a new semi-automatic annotation system named CHEXRIARS which acts as a tool to annotate the extracted spatial relationship features in CXR images. The CHEXRIARS performance is tested using a retrieval test with two common tests namely the precision and recall (PNR). Apart from CHEXRIARS, three other annotation methods that are object slope, object projection and comparison of region boundaries are also included in the retrieval performance test. Overall, the CHEXRIARS interpolated PNR curve shows the best shape because it is the closest curve approaching the value of 1 on the X-axis and Y-axis. Meanwhile the value of area under curve for CHEXRIARS also revealed that this system attained the highest score at 0.856 as compared to the other three annotation methods. The outcome from the retrieval performance test indicated that the proposed annotation model has produced outstanding outcome and improved the image retrieval.","PeriodicalId":36440,"journal":{"name":"Annals of Emerging Technologies in Computing","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135483232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of a Wearable Sensor Glove for Real-Time Sign Language Translation
Pub Date : 2023-10-05 DOI: 10.33166/aetic.2023.05.003
Radzi Ambar, Safyzan Salim, Mohd Helmy Abd Wahab, Muhammad Mahadi Abdul Jamil, Tan Ching Phing
This article describes the development of a wearable sensor glove for sign language translation and an Android-based application that displays words and produces speech for the translated gestures in real time. The objective of this project is to enable conversation between a deaf person and another person who does not know sign language. The glove comprises five (5) flexible sensors and an inertial sensor. The article also elaborates on the development of the Android-based application, built with the MIT App Inventor software, that produces words and speech for the translated gestures in real time. Sign language gestures are measured by the sensors and transmitted to an Arduino Nano microcontroller to be translated into words. The processed data are then transmitted to the Android application via Bluetooth, which displays the words and produces the corresponding sound. Preliminary experimental results demonstrated that the glove successfully displayed words and produced the sound for thirteen (13) translated sign language gestures via the developed application. In the future, it is hoped that further upgrades will produce a device that helps a deaf person communicate with others without over-reliance on sign language interpreters.
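As an illustration of the gesture-to-word mapping such a glove performs, the sketch below matches five flex-sensor readings to the nearest stored template. In the actual system this logic runs on the Arduino Nano; the template values, tolerance, and word list here are invented for illustration.

```python
import math

# Hypothetical calibrated templates: gesture word -> five flex-sensor readings (0-1023).
TEMPLATES = {
    "hello":     [120, 130, 125, 118, 122],
    "thank you": [650, 640, 660, 655, 648],
    "yes":       [900, 150, 140, 145, 150],
}

def classify(reading, max_distance=120.0):
    """Return the word whose template is nearest to the reading,
    or None if no template is close enough."""
    best_word, best_dist = None, float("inf")
    for word, template in TEMPLATES.items():
        dist = math.dist(reading, template)
        if dist < best_dist:
            best_word, best_dist = word, dist
    return best_word if best_dist <= max_distance else None

print(classify([905, 148, 138, 150, 152]))   # -> "yes"
```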
{"title":"Development of a Wearable Sensor Glove for Real-Time Sign Language Translation","authors":"Radzi Ambar, Safyzan Salim, Mohd Helmy Abd Wahab, Muhammad Mahadi Abdul Jamil, Tan Ching Phing","doi":"10.33166/aetic.2023.05.003","DOIUrl":"https://doi.org/10.33166/aetic.2023.05.003","url":null,"abstract":"This article describes the development of a wearable sensor glove for sign language translation and an Android-based application that can display words and produce speech of the translated gestures in real-time. The objective of this project is to enable a conversation between a deaf person and another person who does not know sign language. The glove is composed of five (5) flexible sensors and an inertial sensor. This article also elaborates the development of an Android-based application using the MIT App Inventor software that produces words and speech of the translated gestures in real-time. The sign language gestures were measured by sensors and transmitted to an Arduino Nano microcontroller to be translated into words. Then, the processed data was transmitted to the Android application via Bluetooth. The application displayed the words and produced the sound of the gesture. Furthermore, preliminary experimental results demonstrated that the glove successfully displayed words and produced the sound of thirteen (13) translated sign languages via the developed application. In the future, it is hoped that further upgrades can produce a device to assist a deaf person communicates with normal people without over-reliance on sign language interpreters.","PeriodicalId":36440,"journal":{"name":"Annals of Emerging Technologies in Computing","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135483609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integration of Home Automation and Security System Controller with FPGA Implementation
Pub Date : 2023-10-05 DOI: 10.33166/aetic.2023.05.001
Bryna Ngieng Sing Yii, Nabihah Ahmad, Mohd Helmy Abd Wahab, Warsuzarina Mat Jubadi, Chessda Uttraphan, Syed Zulkarnain Syed Idrus
A home automation system is essential for promoting a safe and comfortable living environment and notable energy conservation for the user. However, adoption of such systems has been hindered by cost, power usage, inadequate security, complexity, and the lack of emergency backup power. Current home automation controllers are limited by their number of ports, fixed architecture, and non-durable, non-parallel execution. With this in view, the integration of a home comfort system, a security system, and an automatic load transfer switch is proposed on a Cyclone IV E EP4CE115F29C7 FPGA board (DE2-115). The top-level module is developed in Verilog Hardware Description Language (HDL) using a bottom-up technique, and a test bench is used for functional verification via ModelSim-Altera. A PWM method is applied to the lighting system to control dimming through digital signals using a counter with a maximum count of 500,000, improving the energy efficiency of the proposed design. In this project, 200 Hz pulses are successfully simulated to prevent visible flickering of the lights during duty cycle generation. Light intensities of 40% and 100% are verified and successfully generated according to the inputs provided by the status of the LDR and IR sensors. The proposed controller gives the correct corresponding outputs to the 13 actuators based on the detected input stimuli. The design utilises a total of 162 (<1%) logic elements, 32 registers, and 74 pins (14%). It successfully integrates the three sub-modules, provides control of the comfort and security system operations at the top level to prevent service failure during power blackouts, and uses only a small fraction of the FPGA's resources.
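The duty-cycle arithmetic above can be checked with a few lines. The 100 MHz input clock assumed below is not stated in the abstract, which gives only the 500,000 maximum count and the resulting 200 Hz pulse rate; the check also shows how the 40% and 100% intensities map onto the counter threshold.

```python
# Assumed input clock; the abstract reports only the 500,000-count period and 200 Hz output.
CLOCK_HZ = 100_000_000
MAX_COUNT = 500_000                                # PWM counter period from the paper

pwm_freq = CLOCK_HZ / MAX_COUNT
print(f"PWM frequency: {pwm_freq:.0f} Hz")         # 200 Hz -> no visible flicker

for duty in (0.40, 1.00):                          # 40% and 100% light intensity
    threshold = int(duty * MAX_COUNT)
    print(f"duty {duty:.0%}: output high while counter < {threshold}")
```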
{"title":"Integration of Home Automation and Security System Controller with FPGA Implementation","authors":"Bryna Ngieng Sing Yii, Nabihah Ahmad, Mohd Helmy Abd Wahab, Warsuzarina Mat Jubadi, Chessda Uttraphan, Syed Zulkarnain Syed Idrus","doi":"10.33166/aetic.2023.05.001","DOIUrl":"https://doi.org/10.33166/aetic.2023.05.001","url":null,"abstract":"A home automation system is essential for promoting a safe and comfortable living environment and notable energy conservation for the user. However, the system’s favour had been obstructed by cost, power usage, inadequate security, complexity, and no emergency backup power. Current home automation systems with controllers were limited by their number of ports, fixed architecture, non-durable and non-parallel executions. Keeping this in view, integration of home comfort system, security system, and the automatic load transfer switch features are proposed using the base of Cyclone IV E: EP4CE115F29C7 FPGA Board (DE2-115). The top-level module is developed via Verilog Hardware Descriptive Language (HDL) with the bottom-up technique and used test bench for functional verification via ModelSim-Altera. The PWM method was applied to the lighting system to control the dimming of light through its digital signals via a maximum 500000 counter to improve energy efficiency for the proposed design. In this project, 200Hz pulses are successfully simulated to prevent visible flickering of lights in duty cycle generation. The light intensity of 40% and 100% are verified and successfully generated according to the inputs provided by the status of the LDR sensor and IR sensor. The proposed controller gives correct corresponding outputs to the 13 actuators based on the detected input stimuli. The proposed design utilized a total of 162 (<1%) logic elements, 32 registers, and total pins of 74 (14%). The proposed design successfully integrated the three-sub module and provided control on comfort and security system operations to prevent service failure during power blackout conditions at the top-level and utilized a low ratio of the FPGA.","PeriodicalId":36440,"journal":{"name":"Annals of Emerging Technologies in Computing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135483611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prediction of MUET Results Based on K-Nearest Neighbour Algorithm
Pub Date : 2023-10-05 DOI: 10.33166/aetic.2023.05.005
Norlina Mohd Sabri, Siti Fatimah Azzahra Hamrizan
Machine learning based prediction has been applied in various fields to solve different kinds of problems. In education, research on the prediction of examination results is gaining more attention among researchers. Adopting machine learning for the prediction of students' achievement enables educational institutions to identify high failure rates, learning problems, and reasons for low student performance. This research proposes the prediction of Malaysian University English Test (MUET) results based on the K-Nearest Neighbour (KNN) algorithm. KNN is a powerful algorithm that has been applied to various prediction problems. Predicting MUET results would help students and lecturers be better prepared and improve the required English language skills before the actual examination. The MUET result prediction is based on students' English course grades, using 516 student records collected from the Universiti Teknologi MARA (UiTM) Dungun campus. The performance measures used are mean accuracy, percentage error, and mean squared error (MSE). In this research, the KNN prediction model produced an acceptable performance with 65.29% accuracy. In future work, KNN could be modified or hybridised to further improve its performance, and other algorithms could be explored for this problem to further validate the best predictive model for MUET results.
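A minimal sketch of the prediction step is shown below, using scikit-learn's KNeighborsClassifier on a tiny synthetic dataset. The grade encoding and MUET band labels are assumptions for illustration, not the 516 UiTM records used in the study.

```python
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Grade points for three English courses (assumed encoding) and the MUET band label.
X = [[4.0, 3.7, 3.3], [2.0, 2.3, 2.7], [3.0, 3.3, 3.0], [1.7, 2.0, 1.3],
     [3.7, 4.0, 3.7], [2.3, 2.0, 2.7], [3.3, 3.0, 3.7], [1.3, 1.7, 2.0]]
y = ["Band 4", "Band 2", "Band 3", "Band 1",
     "Band 4", "Band 2", "Band 3", "Band 1"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
pred = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
```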
{"title":"Prediction of MUET Results Based on K-Nearest Neighbour Algorithm","authors":"Norlina Mohd Sabri, Siti Fatimah Azzahra Hamrizan","doi":"10.33166/aetic.2023.05.005","DOIUrl":"https://doi.org/10.33166/aetic.2023.05.005","url":null,"abstract":"The machine learning based prediction has been applied in various fields to solve different kind of problems. In education, the research on the predictions of examination results is gaining more attentions among the researchers. The adaptation of machine learning for the prediction of students’ achievement enables the educational institutions to identify the high failure rate, learning problems, and reasons for low student performance. This research is proposing the prediction of the Malaysian University English Test (MUET) results based on the K-Nearest Neighbour Algorithm (KNN). KNN is a powerful algorithm that has been applied in various prediction problems. The prediction of the MUET results would help the students and lecturers to be more well prepared and could improve the required English language skills accordingly before the actual examination. The MUET result prediction is based on the student’s English courses grades and there are 516 data of students’ results that have been collected from Universiti Teknologi MARA (UiTM) Dungun campus. The performance measurement that has been used are the mean accuracy, percentage error and mean squared error (MSE). In this research, the KNN prediction model has generated an acceptable performance with 65.29% accuracy. For future work, KNN could be modified or hybridized to further improve its performance. Furthermore, other algorithms could also be explored into this problem to further validate the best predictive model for the prediction of the MUET results.","PeriodicalId":36440,"journal":{"name":"Annals of Emerging Technologies in Computing","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135484114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated Platelet Counter with Detection Using K-Means Clustering
Pub Date : 2023-10-05 DOI: 10.33166/aetic.2023.05.004
Shafaf Ibrahim, Muhammad Faris Afiq Fauzi, Nur Nabilah Abu Mangshor, Raihah Aminuddin, Budi Sunarko
A platelet is a type of blood cell stored and circulated in the human body. It acts as a clotting agent and helps prevent excessive blood loss whenever bleeding occurs. An excessive or inadequate number of platelets can lead to platelet-related diseases. The current practice of platelet counting involves a manual process using a haemocytometer, Wright's stain (dyes that facilitate the differentiation of blood cell types), and a tally counter. This process can be time-consuming, demanding, and exhausting for haematologists, and is prone to errors. Thus, this paper presents a study on automated platelet counting and detection using image processing techniques. K-Means clustering is employed to count and detect the presence of platelets in microscopic blood smear images. Several processes are performed prior to clustering, including image enhancement and conversion to the YCbCr format. Subsequently, image masking and area thresholding are applied to eliminate unwanted entities and highlight the visibility of the platelets before they are detected and counted. A comparative experiment was designed in which the K-Means platelet count and detection were compared with the actual number of platelets reported by haematologists. The counts were categorised into three detection categories: Less Detection (LD), Accurate Detection (AD), and Over Detection (OD). The proposed approach was evaluated on 90 test platelet images, of which 75 were perfectly counted and detected, returning an accuracy of 91.67%. This signifies that the K-Means clustering algorithm is efficient and dependable for automated platelet counting and detection.
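The counting pipeline can be sketched as follows: convert the image to YCbCr, cluster its pixels with K-Means, mask the cluster assumed to contain the stained platelets, and count blobs within a plausible area range. The synthetic test image, cluster-selection rule, and area thresholds below are assumptions for illustration.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

# In practice the input would be a stained blood smear photo, e.g. cv2.imread("smear.jpg");
# a tiny synthetic image keeps the sketch runnable.
img = np.full((120, 120, 3), 230, np.uint8)            # pale background
for centre in [(30, 40), (80, 70), (60, 100)]:
    cv2.circle(img, centre, 4, (140, 40, 160), -1)      # small purple blobs ~ platelets

ycc = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)            # YCbCr-style colour space
pixels = ycc.reshape(-1, 3).astype(np.float32)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)
labels = kmeans.labels_.reshape(ycc.shape[:2])

# Assume the cluster with the lowest mean luma (Y) holds the stained platelets.
platelet_cluster = int(np.argmin(kmeans.cluster_centers_[:, 0]))
mask = (labels == platelet_cluster).astype(np.uint8) * 255

# Area thresholding: keep blobs whose pixel area is plausible for a platelet.
n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
count = sum(1 for i in range(1, n) if 10 <= stats[i, cv2.CC_STAT_AREA] <= 400)
print("platelet count:", count)                          # -> 3 for the synthetic image
```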
{"title":"Automated Platelet Counter with Detection Using K-Means Clustering","authors":"Shafaf Ibrahim, Muhammad Faris Afiq Fauzi, Nur Nabilah Abu Mangshor, Raihah Aminuddin, Budi Sunarko","doi":"10.33166/aetic.2023.05.004","DOIUrl":"https://doi.org/10.33166/aetic.2023.05.004","url":null,"abstract":"Platelet is a blood cell type that is stored and circulated in the human body. It acts as a blood thickening agent and prevents blood from overflowing whenever bleeding occurs. An excessive or inadequate number of platelets could lead to platelet-related diseases. The current practice of platelet counting involves the manual counting process using a haemocytometer, Wright’s Stain which uses the dyes to facilitate the differentiation of blood cell types, and a tally counter. Yet, this process can be time-consuming, demanding, and exhausting for haematologists, and likely to be prone to errors. Thus, this paper presents a study on automated platelet counter and detection using image processing techniques. The K-Means Clustering was employed to count and detect the presence of platelets in microscopic blood smear images. Several processes were performed prior to the K-means clustering, including image enhancement and YCbCr image formatting. Subsequently, image masking, as well as area thresholding were applied to eliminate every unwanted entity and highlight the visibility of the platelets before the number of platelets could be detected and counted. A comparative experiment was designed in which the K-Means Clustering platelet count and detection were compared with the actual number of platelets reported by haematologists. The platelet counts and detection were categorized into three detection categories which are Less Detection (LD), Accurate Detection (AD), and Over Detection (OD). The proposed study was evaluated to 90 testing platelet images. Out of the 90 testing images, 75 platelet images were perfectly counted and detected which returned 91.67% of accuracy. This signifies that the K-Means Clustering algorithm was discovered to be efficient and dependable for automated platelet counter and detection","PeriodicalId":36440,"journal":{"name":"Annals of Emerging Technologies in Computing","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135483608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Comparative Analysis of Community Detection Agglomerative Technique Algorithms and Metrics on Citation Network
Pub Date : 2023-10-01 DOI: 10.33166/aetic.2023.04.001
Sandeep Kumar Rachamadugu, Pushphavathi Thotadara Parameshwarappa
Social Network Analysis is a discipline that represents social relationships as a network of nodes and edges. Constructing a social network with clusters helps reveal the common characteristics or behaviour of a group. A partition of the graph into modules is called a community, and communities are meant to represent actual social groups that share common characteristics. A citation network is a social network with a directed graph, in which one paper cites another, and so on. Citation networks assist researchers in choosing research directions and evaluating research impact. Constructing citation networks with communities directs the user to identify the similarity of documents that are interrelated within one or more domains. This paper applies agglomerative community detection algorithms and metrics to a directed graph to determine the most influential nodes and groups of similar nodes. Two stages are required to construct the communities: generating the network with communities and quantifying the network's performance. The strength and quality of a network are quantified with metrics such as modularity, normalized mutual information (NMI), betweenness centrality, and F-measure. Suitable community detection techniques and metrics for a citation graph are introduced. In the field of community detection, it is common practice to categorise algorithms according to the mathematical techniques they employ and then compare them on benchmark graphs featuring a particular type of assortative community structure. The algorithms are applied to a sample citation sub-dataset extracted from DBLP, ACM, MAG and some additional sources, consisting of 101 nodes (nc) with 621 edges (e) that form 64 communities. The key attributes in the dataset are id, title, abstract, and references. SLM uses local optimisation and scalability to improve community detection in complicated networks. Compared with traditional methods, the proposed LS-SLM algorithm increases modularity by 12.65%, NMI by 2.31%, betweenness centrality by 3.18%, and F-score by 4.05%. The SLM algorithm outperforms existing methods in finding significant and well-defined communities, making it a promising community detection breakthrough.
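A small illustration of the evaluation workflow is given below, with NetworkX's greedy modularity maximisation standing in for the SLM / LS-SLM algorithms discussed in the paper. The toy citation graph and the "true" community labels are invented for illustration.

```python
import networkx as nx
from networkx.algorithms import community
from sklearn.metrics import normalized_mutual_info_score

edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]   # toy citation links
G = nx.DiGraph(edges)

# Agglomerative (greedy modularity) community detection on the undirected view of the graph.
communities = community.greedy_modularity_communities(G.to_undirected())
q = community.modularity(G.to_undirected(), communities)

# Map nodes to predicted community ids and compare against assumed ground-truth labels.
pred = {n: i for i, c in enumerate(communities) for n in c}
truth = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
nodes = sorted(G.nodes())
nmi = normalized_mutual_info_score([truth[n] for n in nodes], [pred[n] for n in nodes])
print(f"modularity={q:.3f}  NMI={nmi:.3f}")
```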
{"title":"A Comparative Analysis of Community Detection Agglomerative Technique Algorithms and Metrics on Citation Network","authors":"Sandeep Kumar Rachamadugu, Pushphavathi Thotadara Parameshwarappa","doi":"10.33166/aetic.2023.04.001","DOIUrl":"https://doi.org/10.33166/aetic.2023.04.001","url":null,"abstract":"Social Network Analysis is a discipline that represents social relationships as a network of nodes and edges. The construction of social network with clusters will contribute in sharing the common characteristics or behaviour of a group. Partitioning the graph into modules is said to be a community. Communities are meant to symbolize actual social groups that share common characteristics. Citation network is one of the social networks with directed graphs where one paper will cite another paper and so on. Citation networks will assist the researcher in choosing research directions and evaluating research impacts. By constructing the citation networks with communities will direct the user to identify the similarity of documents which are interrelated to one or more domains. This paper introduces the agglomerative technique algorithms and metrics to a directed graph which determines the most influential nodes and group of similar nodes. The two stages required to construct the communities are how to generate network with communities and how to quantify the network performance. The strength and a quality of a network is quantified in terms of metrics like modularity, normalized mutual information (NMI), betweenness centrality, and F-Measure. The suitable community detection techniques and metrics for a citation graph were introduced in this paper. In the field of community detection, it is common practice to categorize algorithms according to the mathematical techniques they employ, and then compare them on benchmark graphs featuring a particular type of assortative community structure. The algorithms are applied for a sample citation sub data is extracted from DBLP, ACM, MAG and some additional sources which is taken from and consists of 101 nodes (nc) with 621 edges € and formed 64 communities. The key attributes in dataset are id, title, abstract, references SLM uses local optimisation and scalability to improve community detection in complicated networks. Unlike traditional methods, the proposed LS-SLM algorithm is identified that the modularity is increased by 12.65%, NMI increased by 2.31%, betweenness centrality by 3.18% and F-Score by 4.05%. The SLM algorithm outperforms existing methods in finding significant and well-defined communities, making it a promising community detection breakthrough.","PeriodicalId":36440,"journal":{"name":"Annals of Emerging Technologies in Computing","volume":"149 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135369262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Text Clustering of Tafseer Translations by Using k-means Algorithm: An Al-Baqarah Chapter View
Pub Date : 2023-10-01 DOI: 10.33166/aetic.2023.04.003
Mohammed A. Ahmed, Hanif Baharin, Puteri NE. Nohuddin
The Al-Quran is Muslims' main book of belief and behaviour. It is used as a reference by millions of Muslims worldwide, and as such, it is useful for Muslims in general and Muslim academics in particular to gain knowledge from it. Many translators have worked on translating the Quran into many different languages around the world, including English. Each translator therefore has his or her own perspectives, statements, and opinions when translating verses derived from the Tafseer (exegesis) of the Quran. This work aims to cluster these variations among the Tafseer translations by utilising text clustering. As part of the text mining approach, text clustering groups documents according to how similar they are. This study adapted the k-means clustering algorithm (unsupervised learning) to illustrate and discover the relationships between keywords, called features or concepts, for five different translators across the 286 verses of the Al-Baqarah chapter. The datasets were preprocessed and features extracted by applying TF-IDF (Term Frequency-Inverse Document Frequency). The findings show two- and three-dimensional clustering plots for the first two or three most frequent features, assigned to seven cluster categories (k=7) for each of the five translated Tafseer. The features 'allah/god', 'believ', and 'said' are the three features most shared among the five Tafseer.
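The clustering pipeline can be illustrated as follows: TF-IDF features are extracted from verse translations, clustered with k-means, and the top terms per cluster are listed. The sample verses are placeholders; the study used k=7 over the 286 Al-Baqarah verses, while k is capped here to fit the toy data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

verses = [
    "believers establish prayer and spend from what allah has provided",
    "allah said to the angels",                    # placeholder verse translations
    "those who believe and do good deeds",
]
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(verses)

k = min(7, len(verses))                            # k=7 in the study; capped for the toy data
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for c in range(k):
    top = km.cluster_centers_[c].argsort()[::-1][:3]
    print(f"cluster {c}:", [terms[i] for i in top])
```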
{"title":"Text Clustering of Tafseer Translations by Using k-means Algorithm: An Al-Baqarah Chapter View","authors":"Mohammed A. Ahmed, Hanif Baharin, Puteri NE. Nohuddin","doi":"10.33166/aetic.2023.04.003","DOIUrl":"https://doi.org/10.33166/aetic.2023.04.003","url":null,"abstract":"Al-Quran is Muslims’ main book of belief and behaviour. The Al-Quran is used as a reference book by millions of Muslims worldwide, and as such, it is useful for Muslims in general and Muslim academics to gain knowledge from it. Many translators have worked on the Quran’s translation into many different languages around the world, including English. Thus, every translator has his/her own perspectives, statements, and opinions when translating verses acquired from the (Tafseer) of the Quran. However, this work aims to cluster these variations among translations of the Tafseer by utilising text clustering. As a part of the text mining approach, text clustering includes clustering documents according to how similar they are. This study adapted the (k-means) clustering technique algorithm (unsupervised learning) to illustrate and discover the relationships between keywords called features or concepts for five different translators on the 286 verses of the Al-Baqarah chapter. The datasets have been preprocessed, and features extracted by applying TF-IDF (Term Frequency-Inverse Document Frequency). The findings show two/three-dimensional clustering plotting for the first two/three most frequent features assigned to seven cluster categories (k=7) for each of five translated Tafseer. The features ‘allah/god’, ‘believ’, and ‘said’ are the three most features shared by the five Tafseer.","PeriodicalId":36440,"journal":{"name":"Annals of Emerging Technologies in Computing","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135369264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Remote Augmented Reality Application: A Study on Cues and Behavioural Dimension
Pub Date : 2023-10-01 DOI: 10.33166/aetic.2023.04.005
Nur Intan Adhani Binti Muhamad Nazri, Dayang Rohaya Awang Rambli
Remote augmented reality (AR) collaboration promotes an interactive way of presenting information to the user by conveying messages and instructions to local and remote participants. Despite its advantages, the limited use of sensory modalities during remote collaboration can interrupt the transmission of information and interaction cues; when the right information is not conveyed in remote AR collaboration, focus and responses between local and remote users can be affected. This study investigates the behavioural dimension of collaboration (collaborators' behaviour) and the cues involved between local and remote users during a physical task. Six participants acted as local participants who had to build a LEGO model, while another six acted as remote participants holding the complete instruction manual. Participants were given a maximum of 60 minutes to complete the task. The results show that, most of the time, participants used gesture and speech cues to interact with each other. Certain signals and keywords were established by both participants to reach a mutual understanding in achieving the desired goal. Moreover, tasks completed hands-free produced faster responses.
{"title":"Remote Augmented Reality Application: A Study on Cues and Behavioural Dimension","authors":"Nur Intan Adhani Binti Muhamad Nazri, Dayang Rohaya Awang Rambli","doi":"10.33166/aetic.2023.04.005","DOIUrl":"https://doi.org/10.33166/aetic.2023.04.005","url":null,"abstract":"Remote augmented reality (AR) collaboration promotes an interactive way to present information to the user by conveying a message and instruction to the local and remote participants. Despite its advantages, it is found that due to the limited use of sensory modalities during the remote collaboration process, it can interrupt the transmission of information and interaction cues, by not conveying the right information in remote AR collaboration in which can affect focus, and responses between local and remote users. This study is intended to investigate the behavioural dimension of collaboration (collaborator’s behaviour) and cues involved between local and remote user for physical task. Six participants performed as local participants where they need to build a LEGO, while another 6 participants performed as remote participants that have a complete manual instruction. Participants were given maximum 60 minutes to complete the given task. The results shown that most of the time participants used gesture and speech cues to interact with each other. There are certain signals and keywords established by both participants to have mutual understanding in achieving desired goal. Moreover, it was shown that the task completed by using handsfree produce faster response.","PeriodicalId":36440,"journal":{"name":"Annals of Emerging Technologies in Computing","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135369270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}