An Intelligent Program to Monitor 3D Printing and Detect Failures using Computer Vision and Machine Learning
Pub Date: 2023-04-29 | DOI: 10.5121/csit.2023.130713
Christine Li, Yujia Zhang
This paper proposes a novel solution for tracking the 3D printing process using an application that provides users with real-time updates on its progress [1]. The approach involves taking pictures of the 3D printer during the printing process, which are then analyzed by an AI model trained on thousands of labeled images to detect print failures [2]. The system is implemented using a Raspberry Pi and a camera, which capture images of the 3D printer and upload them to an online database [3]. The proposed application accesses this database to keep the user informed of the printer's current state, ensuring a seamless printing experience.
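As a rough illustration of the capture-classify-upload loop described above (not the authors' code), the following Python sketch assumes an OpenCV-readable camera, a hypothetical Keras model file failure_detector.h5, and a placeholder REST endpoint standing in for the online database:

# Minimal sketch: periodically capture a frame, score it with a binary
# "failure" classifier, and post the result to a REST endpoint that stands
# in for the online database described in the abstract.
import time
import cv2                      # camera capture + preprocessing
import numpy as np
import requests
import tensorflow as tf

DB_ENDPOINT = "https://example-db.firebaseio.com/printer_status.json"  # assumed URL
model = tf.keras.models.load_model("failure_detector.h5")              # assumed model file

camera = cv2.VideoCapture(0)    # Raspberry Pi camera exposed as a video device
while True:
    ok, frame = camera.read()
    if not ok:
        break
    x = cv2.resize(frame, (224, 224)).astype("float32") / 255.0
    p_fail = float(model.predict(np.expand_dims(x, 0), verbose=0)[0][0])
    requests.put(DB_ENDPOINT, json={"timestamp": time.time(),
                                    "failure_probability": p_fail,
                                    "state": "failed" if p_fail > 0.5 else "printing"})
    time.sleep(30)              # poll interval; tune to the print speed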
A Review of Research in First-Stage Retrieval
Pub Date: 2023-04-29 | DOI: 10.5121/csit.2023.130705
Mengxue Du, Shasha Li, Jie Yu, Jun Ma, Huijun Liu, Miaomiao Li
In this paper, first-stage retrieval technology is studied from four aspects: development background, frontier techniques, current challenges, and future directions. Our contribution consists of two main parts. On the one hand, the paper reviews retrieval techniques proposed by researchers and draws targeted conclusions through comparative analysis. On the other hand, different research directions are discussed, and the impact of combining different techniques on first-stage retrieval is studied and compared. In this way, the survey provides a comprehensive overview of the field and will hopefully be used by researchers and practitioners in the first-stage retrieval domain, inspiring new ideas and further developments.
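For readers new to the topic, the following toy Python sketch (not drawn from the survey) shows what a classical lexical first-stage retriever looks like, using TF-IDF with cosine similarity; the dense and learned-sparse retrievers discussed in such surveys replace this scoring step:

# Illustrative only: a classical lexical first-stage retriever.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

corpus = ["neural dense retrieval", "BM25 term weighting", "learned sparse retrieval"]
vectorizer = TfidfVectorizer()
doc_vecs = vectorizer.fit_transform(corpus)

def first_stage(query, k=2):
    q = vectorizer.transform([query])
    scores = linear_kernel(q, doc_vecs).ravel()      # cosine on L2-normalised TF-IDF
    return sorted(zip(scores, corpus), reverse=True)[:k]

print(first_stage("dense retrieval"))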
A Modular Hierarchical Model for Paper Quality Evaluation
Pub Date: 2023-04-29 | DOI: 10.5121/csit.2023.130702
Xi Deng, Shasha Li, Jie Yu, Jun Ma, Bing Ji, Wuhang Lin, Shezheng Song, Zibo Yi
Paper quality evaluation is of great significance, as it helps to select high-quality papers from the massive volume of academic publications. However, existing models need improvement in how they handle interaction and aggregation across a paper's hierarchical structure, and they ignore the guiding role of the title and abstract in the paper text. To address these two issues, we propose a well-designed modular hierarchical model (MHM) for paper quality evaluation. First, the input to our model is most of the paper text, and no additional information is needed. Second, we fully exploit the inherent hierarchy of the text with three attention-based encoders: a word-to-sentence (WtoS) encoder, a sentence-to-paragraph (StoP) encoder, and a paper encoder. Specifically, the WtoS encoder uses the pre-trained language model SciBERT to obtain sentence representations from word representations. The StoP encoder lets sentences in the same paragraph interact and aggregates them into paragraph embeddings based on importance scores. The paper encoder models interaction among the three modules of a paper's text: the title, the abstract sentences, and the body paragraphs, and aggregates the resulting representations into a compact vector. In addition, the paper encoder models the guiding role of the title and abstract separately, generating two further compact vectors. We concatenate these three compact vectors with four additional manual features to obtain the paper representation, which is then fed into a classifier to produce the acceptance decision, a proxy for paper quality. Experimental results on a large-scale dataset that we built show that our model consistently outperforms strong baselines on four evaluation metrics. Quantitative and qualitative analyses further validate the superiority of our model.
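The hierarchical aggregation idea can be sketched in a few lines of PyTorch. This is an illustrative reconstruction, not the authors' implementation: sentence vectors (e.g., from SciBERT, hidden size 768) are attention-pooled into paragraph vectors, pooled again into a paper vector, concatenated with four manual features, and classified.

# Illustrative sketch of attention-based hierarchical pooling + classification.
import torch
import torch.nn as nn

class AttnPool(nn.Module):
    """Weighted sum of item embeddings using a learned importance score."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)
    def forward(self, x):                      # x: (items, dim)
        w = torch.softmax(self.score(x), dim=0)
        return (w * x).sum(dim=0)              # (dim,)

dim, n_manual = 768, 4                          # 768 = SciBERT hidden size
sent_to_para, para_to_paper = AttnPool(dim), AttnPool(dim)
classifier = nn.Linear(dim + n_manual, 2)

# Toy paper: 3 paragraphs x 5 sentences, each sentence already encoded (e.g., by SciBERT).
paragraphs = [torch.randn(5, dim) for _ in range(3)]
paper_vec = para_to_paper(torch.stack([sent_to_para(p) for p in paragraphs]))
manual = torch.randn(n_manual)                  # e.g., length, reference count, ...
logits = classifier(torch.cat([paper_vec, manual]))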
A Powerful Chrome Extension: Translation Program using Python, Website Analysis and Google Firebase Services
Pub Date: 2023-04-29 | DOI: 10.5121/csit.2023.130716
Tongde Zhao, Khoa Tran
The last few decades have seen remarkable technological development aimed at fostering understanding between people with different language backgrounds. This paper develops a Chrome extension that scans the text on a website, identifies English words likely to be unfamiliar to the user, and then translates those words and displays their meanings on screen. Built with Python, the Flask framework, and Google Firebase as the backend, the extension can be installed in any Chrome browser and provides definitions for difficult words on the websites users want translated. It is useful to those who have a hard time understanding difficult English vocabulary. In our experiments, the application detected 95 to 98% of the difficult words users were struggling with, processing documents of more than 2,000 words in roughly 10 minutes. The results show that the application provides adequate predictions for most users. In building the application, we took data reliability, usability, and web scraping into consideration: the extension requires ample reliable data, a user-friendly interface, and a stable Internet browser.
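A minimal sketch of the kind of Flask backend such an extension could call is shown below; the route name, the stand-in common-word list, and the stand-in definition store are assumptions for illustration, not the paper's actual service:

# Sketch: receive page text, flag words outside a common-word list, return definitions.
from flask import Flask, request, jsonify

app = Flask(__name__)
COMMON_WORDS = {"the", "a", "is", "of", "and", "to", "in"}       # stand-in frequency list
DICTIONARY = {"ubiquitous": "present or found everywhere"}        # stand-in definition store

@app.route("/define", methods=["POST"])
def define():
    words = set(request.json.get("text", "").lower().split())
    hard = [w for w in words if w.isalpha() and w not in COMMON_WORDS]
    return jsonify({w: DICTIONARY.get(w, "definition unavailable") for w in hard})

if __name__ == "__main__":
    app.run(port=5000)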
IOT Network Proposal for the Identification, Monitoring and Location of Crocodiles in the Estuary of Puerto Vallarta, Jalisco, Mexico
Pub Date: 2023-04-29 | DOI: 10.5121/csit.2023.130715
Miguel Angel Gallardo Lemus, J. C. Ramos, R. O. D. Arcega
In the city of Puerto Vallarta, Mexico, crocodiles are distributed throughout the Bahía de Banderas area, between the states of Nayarit and Jalisco. Human settlement has invaded their habitat, forcing them to seek out the spaces they naturally occupied and generating confrontations. This work describes the background of the problem and a possible way to mitigate it: the use of IoT technology to monitor the location of each crocodile, together with the definition of risk zones used to warn of potentially dangerous situations. Accordingly, we present the design of a wireless sensor network for monitoring the crocodiles found in the estuary and the Marina, an area surrounded by commercial and residential zones. A LoRa network is proposed, since the coverage of the coast, mangrove swamp, and Marina spans around 8 km. A star topology with a single hub and a gateway node was chosen to send the data to a server. A NoSQL database service such as Firebase and a data-visualization application built with React Native are proposed. The data of interest for the project are only the latitude and longitude provided by the GPS, decoded through an MKR 1300 development board. With this system it is possible to learn the reptiles' behavior, act quickly when a crocodile leaves its natural area, and notify Civil Protection.
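On the gateway/server side, the processing described above could look roughly like the following Python sketch; the payload format (two little-endian floats), the risk-zone coordinates, and the database endpoint are assumptions for illustration:

# Gateway-side sketch: unpack a GPS fix received over LoRa, flag it if it falls
# inside a predefined risk zone, and forward the record to a Firebase-style REST endpoint.
import math
import struct
import time
import requests

DB_ENDPOINT = "https://example-crocs.firebaseio.com/fixes.json"   # assumed URL
RISK_ZONES = [{"name": "Marina walkway", "lat": 20.661, "lon": -105.254, "radius_m": 300}]

def haversine_m(lat1, lon1, lat2, lon2):
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def handle_packet(croc_id: int, payload: bytes):
    lat, lon = struct.unpack("<ff", payload)          # assumed: two little-endian floats
    alerts = [z["name"] for z in RISK_ZONES
              if haversine_m(lat, lon, z["lat"], z["lon"]) <= z["radius_m"]]
    requests.post(DB_ENDPOINT, json={"croc_id": croc_id, "lat": lat, "lon": lon,
                                     "alerts": alerts, "ts": time.time()})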
Developing an Objective Refereeing System for Fencing: Using Pose Estimation Algorithms and Expert Knowledge Systems to Determine Priority and Ensure Fairness
Pub Date: 2023-04-29 | DOI: 10.5121/csit.2023.130710
Haokai Zhou, Aleksandr Smolin
Fencers in foil and sabre are often concerned with their referees' preferences when determining priority, which decides who receives the point in a bout [1]. Referees often fail to determine priority rationally and apply the rules consistently, leading to contradictory decisions within the same bout. This frequently causes heated arguments and discord at fencing competitions [2]. This paper develops software that identifies fencers in a video recording, locates key points of their body structure, records their movements and critical performance metrics, and matches them against an objective expert knowledge system to determine who truly has priority at any given moment in the match. We tested several pose estimation algorithms, including YOLOv5, YOLOv7, and MediaPipe, to determine which offers the best accuracy and performance, so that precise, unbiased, and fair refereeing decisions can be delivered quickly; referees can then review the logic behind each decision, along with all the data it was based on, to validate its veracity [3][4]. We also use caching to quickly reload and review previous decisions in case any doubt about the bout's outcome arises after the fact.
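As an illustration of the first two steps (pose extraction and a rule check), the sketch below runs MediaPipe Pose on each half of the frame, assuming one fencer per half, and applies a deliberately simplified priority rule; it is not the authors' expert knowledge system:

# Illustrative sketch: per-fencer pose extraction plus a toy priority heuristic.
import cv2
import mediapipe as mp

WRIST = mp.solutions.pose.PoseLandmark.RIGHT_WRIST
poses = {"left": mp.solutions.pose.Pose(), "right": mp.solutions.pose.Pose()}

def wrist_x(side, frame_bgr):
    """Normalised x of the right wrist for the fencer in this half, or None."""
    res = poses[side].process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    return res.pose_landmarks.landmark[WRIST.value].x if res.pose_landmarks else None

cap = cv2.VideoCapture("bout.mp4")        # assumed input recording
prev = {"left": None, "right": None}
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    halves = {"left": frame[:, : w // 2], "right": frame[:, w // 2 :]}
    for side, half in halves.items():
        x = wrist_x(side, half)
        # Toy rule: the fencer whose weapon arm is advancing toward the opponent
        # (x increasing for the left fencer, decreasing for the right) is the
        # current candidate for priority.
        if x is not None and prev[side] is not None:
            advancing = x > prev[side] if side == "left" else x < prev[side]
            if advancing:
                print(f"{side} fencer is extending -- candidate for priority")
        prev[side] = x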
A Kriging-HDMR Combined with Adaptive Proportional Sampling for Multi-Parameter Approximate Modeling
Pub Date: 2023-04-29 | DOI: 10.5121/csit.2023.130708
High-dimensional, complex multi-parameter problems are common in engineering, while traditional approximate modeling is limited to low- or medium-dimensional problems: it cannot overcome the curse of dimensionality, and its accuracy drops sharply as the design parameter space grows. Therefore, this paper combines Kriging with Cut-HDMR and proposes an improved Kriging-HDMR method based on an adaptive proportional sampling strategy, making full use of Kriging's interpolation-based predictions and their associated error estimates to improve modeling efficiency. Three numerical tests, covering variable coupling, high-dimensional nonlinearity, and computational cost, were used to verify the effectiveness of the algorithm, and it was compared with traditional Kriging-HDMR and RBF-HDMR using the approximation-accuracy measures R2, REEA, and RMEA. The results show that the improved Kriging-HDMR greatly reduces the sampling cost and avoids falling into local optima. In addition, at the same computational cost and with a scale coefficient of 1/2, Kriging-HDMR achieves higher global approximation accuracy and stronger robustness while preserving the hierarchical structure of the coupling between input variables.
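For reference, the standard Cut-HDMR decomposition that such methods build on (written here in its generic form, not the paper's specific variant) expands the response around an anchor point \mathbf{x}^0:

f(\mathbf{x}) \approx f_0 + \sum_{i=1}^{n} f_i(x_i) + \sum_{1 \le i < j \le n} f_{ij}(x_i, x_j) + \cdots,
\quad\text{with}\quad
f_0 = f(\mathbf{x}^0), \qquad
f_i(x_i) = f(x_i, \mathbf{x}^0_{-i}) - f_0, \qquad
f_{ij}(x_i, x_j) = f(x_i, x_j, \mathbf{x}^0_{-ij}) - f_i(x_i) - f_j(x_j) - f_0.

In Kriging-HDMR variants, each low-dimensional component term is typically approximated by its own Kriging surrogate fitted from samples taken along the corresponding cut through the anchor point.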
A Data-Driven Application for Matching Student Traits with Learning Opportunities using Artificial Intelligence
Pub Date: 2023-04-29 | DOI: 10.5121/csit.2023.130714
Chongda You, Andrew Park
Students often have limited access to opportunities to apply their knowledge and learn new things, because different opportunities are advertised in vastly different ways. At the same time, many organizations are looking for passionate students to apply their knowledge and energy to benefit society [2]. To solve this problem and ease the difficulty of finding the right opportunities, Maclever aims to be a place where organizations can post opportunities for students, and students can use the application's artificial-intelligence-based features to find the opportunities that best fit their skills. Maclever aims to be a simple and effective connection between organizations and students [3]. Leveraging tools such as sentiment analysis, together with models of user behavior and preferences, allows the system to surface valuable connections and gives it a much stronger ability to address the goals Maclever sets out to achieve.
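As a toy illustration of skill-based matching (assumed data shapes, not Maclever's actual model), a simple overlap score already conveys the idea of ranking opportunities for a student:

# Toy illustration: rank opportunities by overlap between skill tags.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

student = {"skills": ["python", "data analysis", "volunteering"]}
opportunities = [
    {"title": "Community data project", "skills": ["python", "data analysis"]},
    {"title": "Beach cleanup lead", "skills": ["volunteering", "leadership"]},
]
ranked = sorted(opportunities,
                key=lambda o: jaccard(student["skills"], o["skills"]), reverse=True)
print([o["title"] for o in ranked])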
An Automated Generation from Video to 3D Character Animation using Artificial Intelligence and Pose Estimate
Pub Date: 2023-04-29 | DOI: 10.5121/csit.2023.130703
Daniel Haocheng Xian, Jonathan Sahagun
This paper presents a novel approach to automatically generate 3D character animation from video using artificial intelligence and pose estimation [3]. The proposed system first extracts the pose information from the input video using a pose estimation model [2]. Then, an artificial neural network is trained to generate the corresponding 3D character animation based on the extracted pose information [1]. The generated animation is then refined using a set of animation filters to enhance the quality of the final output. Our experimental results demonstrate the effectiveness of the proposed approach in generating realistic and natural-looking 3D character animations from video input [4]. This automated process has the potential to greatly reduce the time and effort required for creating 3D character animations, making it a valuable tool for the entertainment and gaming industries.
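The pose-extraction stage could be sketched as follows (assuming MediaPipe for landmark estimation; the animation-generation network itself is only represented here by the resulting input tensor):

# Sketch: turn a video into a (frames, 33 landmarks, 3) array of pose coordinates
# that a downstream animation model could consume.
import cv2
import numpy as np
import mediapipe as mp

pose = mp.solutions.pose.Pose()
cap = cv2.VideoCapture("input.mp4")           # assumed input clip
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    res = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if res.pose_landmarks:
        frames.append([[lm.x, lm.y, lm.z] for lm in res.pose_landmarks.landmark])

pose_sequence = np.array(frames)              # (n_frames, 33, 3) input to the animation model
print(pose_sequence.shape)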
Role of Image Processing in Dentistry – A Systematic Review
Pub Date: 2023-04-29 | DOI: 10.5121/csit.2023.130709
Ramyaalakshmi A, P. S.
Image processing plays an important role in many fields, and dentistry is one of them. It is of great help to dentists and clinicians in detecting and diagnosing disease, since identifying the appropriate treatment requires a digital dental image with good feature contrast. Processing dental images is usually tedious and time-consuming because human teeth are uneven and unstructured; moreover, X-ray images vary in intensity, noise, and contrast, which poses further challenges. A dental X-ray is therefore always pre-processed to obtain a well-contrasted image, and segmentation of image features plays a vital role in evaluating dental disease. This paper reviews image processing techniques and their features, along with their applications, and gives a comparative study of how the techniques are used.
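As one concrete example of the pre-processing and segmentation steps this review covers, the OpenCV sketch below applies CLAHE for contrast enhancement followed by Otsu thresholding for a coarse segmentation; the file name and parameters are illustrative:

# Example pipeline: contrast enhancement (CLAHE) + coarse segmentation (Otsu).
import cv2

xray = cv2.imread("dental_xray.png", cv2.IMREAD_GRAYSCALE)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(xray)                               # contrast-enhanced radiograph
_, teeth_mask = cv2.threshold(enhanced, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # tooth/background split
cv2.imwrite("enhanced.png", enhanced)
cv2.imwrite("teeth_mask.png", teeth_mask)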