Farming is crucial for various aspects of daily life, including food, the economy, the environment, culture, and community. It provides employment opportunities, generates income, and increases the export of agricultural products, particularly in rural areas. Sustainable farming practices promote soil health, biodiversity, and ecosystem services, and are essential in many parts of the world. Farming is deeply rooted in cultures and traditions and is a way of life for many communities, passed down from generation to generation. Without farming, we would not have the abundance and variety of food that we enjoy today. Advancements in technology, such as artificial intelligence, machine learning, and the Internet of Things, have greatly impacted agriculture by producing vast amounts of digital data on crops, soil, and weather conditions. However, managing and analyzing this data can be challenging for farmers, especially those in developing nations. To address this issue, affordable digital farming solutions, including open-source software platforms, sensor networks, and mobile apps, are being developed to help farmers optimize their resources and increase yields and profits. Digital twin technology can play a crucial role in digital farming by providing farmers with a virtual replica of their physical farm. A digital twin is a digital representation of a real-world asset, such as a farm or a particular crop field, that gathers information from sensors, weather stations, and satellite imagery. The technology has been hailed as revolutionary in a number of fields, including manufacturing, construction, agriculture, healthcare, and the automotive and aerospace industries. However, it is still in its early stages in agriculture, and it can be challenging to handle the interactions between different farming-related digital twin components.
Additionally, digital twinning can require significant investment in technology and infrastructure, which may be a barrier for small-scale farmers.
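The abstract's notion of a digital twin as a virtual replica that accumulates sensor, weather-station, and satellite data can be sketched minimally. The class and method names below are illustrative assumptions, not the paper's design:

```python
from dataclasses import dataclass, field

@dataclass
class FieldTwin:
    """Hypothetical minimal digital twin of a crop field: it mirrors the
    physical asset by accumulating time-stamped sensor readings per metric."""
    name: str
    readings: dict = field(default_factory=dict)  # metric -> list of (t, value)

    def ingest(self, metric, t, value):
        # Append one reading to the metric's time series.
        self.readings.setdefault(metric, []).append((t, value))

    def latest(self, metric):
        # Most recent value, or None if nothing has been ingested yet.
        series = self.readings.get(metric)
        return series[-1][1] if series else None

    def mean(self, metric):
        # Simple aggregate a farmer-facing dashboard might show.
        series = self.readings.get(metric, [])
        return sum(v for _, v in series) / len(series) if series else None
```

A real twin would add synchronization with the physical asset and predictive models on top of this kind of state store.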
Nabarun Dawn, Souptik Ghosh, Tania Ghosh, Sagnik Guha, Subhajit Sarkar, Aloke Saha, Pronoy Mukherjee, and Tanmay Sanyal. "A Review on Digital Twins Technology: A New Frontier in Agriculture." Artificial Intelligence and Applications, 2023. DOI: 10.47852/bonviewaia3202919
Pub Date: 2023-01-01. DOI: 10.47852/bonviewaia3202729
Isaac Kofi Nti, S. Boateng, Juanita Ahia Quarcoo, P. Nimbe
Several topics, problems, and established legal principles are already being challenged using artificial intelligence (AI) in numerous applications. The powers of AI have been snowballing to the point where it is evident that AI applications in law and various economic sectors aid in promoting a good society. However, questions such as who the prolific authors, papers, and institutions are, as well as what the specific and thematic areas of application are, remain unanswered. In the current study, 177 papers on artificial intelligence applications in law published between 1960 and April 29, 2022, were pulled from Scopus using keywords and analysed scientometrically. We identified the strongest citation bursts, the most prolific authors, countries/regions, and primary research interests, as well as their evolution trends and collaborative relationships over the past 62 years. The analysis also identified co-authorship networks, collaboration networks of countries/regions, co-occurrence networks of keywords, and timeline visualizations of keywords. This study concludes that systematic study of and sufficient attention to artificial intelligence application in law (AIL) are still lacking. The methodical design of the required platforms; the collection, cleansing, and storage of data; and the collaboration of many stakeholders, researchers, and nations/regions are all problems that AIL must still overcome. Both researchers and industry professionals who are devoted to AIL will find value in the findings.
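The keyword co-occurrence networks this kind of scientometric analysis visualizes reduce to pairwise co-appearance counts over papers. A minimal sketch (function name and input shape are assumptions, not the study's tooling):

```python
from itertools import combinations
from collections import Counter

def keyword_cooccurrence(papers):
    """Count how often each unordered pair of keywords appears together
    in a paper; the counts become edge weights in a co-occurrence network."""
    edges = Counter()
    for kws in papers:
        # Deduplicate and sort so each pair is counted once per paper.
        for a, b in combinations(sorted(set(kws)), 2):
            edges[(a, b)] += 1
    return edges
```

Tools such as CiteSpace or VOSviewer compute essentially this before layout and burst detection.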
Isaac Kofi Nti, S. Boateng, Juanita Ahia Quarcoo, and P. Nimbe. "Artificial Intelligence Application in Law: A Scientometric Review." Artificial Intelligence and Applications, 2023. DOI: 10.47852/bonviewaia3202729
Pub Date: 2023-01-01. DOI: 10.47852/bonviewaia3202624
Chandravva Hebbi, Mamatha H. R.
In this work, an attempt is made to build a dataset for handwritten Kannada characters and to recognize isolated Kannada vowels, consonants, modifiers, and ottaksharas. The dataset is collected from 500 writers of varying age, gender, qualification, and profession. It will be used to recognize handwritten kagunitas, ottaksharas, and other base characters, on which existing works have focused very little; no datasets exist for them. Hence, a dataset of 85 handwritten characters is built using an unsupervised machine learning technique, i.e., K-means hierarchical clustering with Run Length Code (RLC) features; an accuracy of 80% was achieved with the unsupervised method. The dataset consists of 130,981 samples across 85 classes, which are further divided into upper, middle, and lower zones based on the position of the character in the script. After the dataset was built, an SVM model with HOG features was used for recognition, and accuracies of 99.0%, 88.6%, and 92.2% were obtained for the upper, middle, and lower zones, respectively. To increase the recognition rate, a CNN model was fine-tuned on raw input, and accuracies of 100%, 96.15%, and 95.38% were obtained for the upper, middle, and lower zones, respectively. With the ResNet18 model, accuracies of 99.88%, 98.92%, and 97.55% were obtained for the respective zones. The dataset will be made available online for researchers to carry out research on handwritten characters, kagunitas, and word recognition with segmentation.
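The HOG-plus-SVM stage rests on orientation histograms of image gradients. The toy descriptor below is a deliberate simplification (one global histogram; real HOG uses cells, blocks, and block normalization) meant only to show the idea of what gets fed to the SVM:

```python
import numpy as np

def toy_hog(img, bins=9):
    """Very simplified HOG-style descriptor: a single magnitude-weighted
    histogram of unsigned gradient orientations over the whole image.
    (Assumption: real HOG adds cell/block structure and normalization.)"""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)                       # gradient strength per pixel
    ang = np.mod(np.arctan2(gy, gx), np.pi)      # unsigned orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```

Stacking such descriptors per character image and fitting a linear SVM (e.g. scikit-learn's `LinearSVC`) gives the classical pipeline the paper compares against its CNNs.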
Chandravva Hebbi and Mamatha H. R. "Comprehensive Dataset Building and Recognition of Isolated Handwritten Kannada Characters Using Machine Learning Models." Artificial Intelligence and Applications, 2023. DOI: 10.47852/bonviewaia3202624
Pub Date: 2023-01-01. DOI: 10.47852/bonviewaia3202456
Shital Nivrutti Katkade, Vandana C. Bagal, Ramesh R. Manza, Pravin L Yannawar
Visually impaired people's daily lives are difficult: they face numerous problems when traveling from one location to another, and they are more likely to be involved in an accident as a result of their lack of vision. The motive of this review paper is to explore the various approaches used by researchers worldwide to help persons with vision loss fulfill their full potential. Such systems alert visually impaired individuals about their surroundings by employing some form of audio device, extracting information about the objects present nearby using devices that act as visual substitutes. Most solutions proposed by researchers require additional hardware, which adds to the burden on visually impaired people. A system is required that helps them in their daily lives, becomes part of their life, and does not feel like a burden. The datasets used by researchers for object detection, such as COCO (provided by Microsoft), Pascal VOC, and ImageNet, are publicly available on the internet.
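The "detector output to audio alert" step the review describes can be illustrated with a small formatter; the function name, detection format, and phrasing are assumptions, and a real system would pass the returned string to a text-to-speech engine:

```python
def describe_scene(detections):
    """Turn object-detector output into one short spoken alert string.
    detections: list of (label, position) pairs, position being a coarse
    direction such as "left", "ahead", or "right" (hypothetical format)."""
    if not detections:
        return "No obstacles detected."
    parts = [f"{label} {pos}" for label, pos in detections]
    return "Caution: " + ", ".join(parts) + "."
```

Keeping the alert short matters here: the audio channel is the user's substitute for vision and is easily overloaded.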
Shital Nivrutti Katkade, Vandana C. Bagal, Ramesh R. Manza, and Pravin L Yannawar. "Advances in Real-Time Object Detection and Information Retrieval: A Review." Artificial Intelligence and Applications, 2023. DOI: 10.47852/bonviewaia3202456
Pub Date: 2023-01-01. DOI: 10.47852/bonviewaia32021197
Lama Alkhaled, Ayush Roy, Shivakumara Palaiahnakote
Digital water meter digit recognition from images of water meter readings is a challenging research problem. One key reason is the lack of publicly available datasets with which to develop such methods. Another is that the digits suffer from poor image quality. In this work, we develop a dataset, called MR-AMR-v1, which comprises the 10 digits (0 to 9) commonly found in electrical and electronic water meter readings. Additionally, we generate a synthetic benchmarking dataset to make the proposed model robust. We propose a weighted probability averaging ensemble-based water meter digit recognition method applied to snapshots of the Fourier transformed convolution block attention module (FCBAM) aided combined ResNet50-InceptionV3 architecture. The proposed method achieves an accuracy of 88% on test set images (benchmarking data). Our model also achieves a high accuracy of 97.73% on the MNIST dataset. We benchmark the result on this dataset using the proposed method after performing an exhaustive set of experiments.
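The weighted probability averaging step of such an ensemble is straightforward to sketch. The equal weights below are placeholders (the paper's actual weighting is not given here), and the two probability arrays stand in for the ResNet50 and InceptionV3 branch outputs:

```python
import numpy as np

def weighted_average_ensemble(probs_a, probs_b, w_a=0.5, w_b=0.5):
    """Fuse two classifiers' per-class probability outputs by a weighted
    average, renormalize, and return (predicted classes, fused probs)."""
    fused = w_a * np.asarray(probs_a) + w_b * np.asarray(probs_b)
    fused = fused / fused.sum(axis=-1, keepdims=True)  # keep rows summing to 1
    return fused.argmax(axis=-1), fused
```

With probabilities rather than hard labels, a branch that is confidently right can outvote a branch that is weakly wrong, which is the usual motivation for probability-level over label-level fusion.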
Lama Alkhaled, Ayush Roy, and Shivakumara Palaiahnakote. "An Attention based Fusion of ResNet50 and InceptionV3 Model for Water Meter Digit Recognition." Artificial Intelligence and Applications, 2023. DOI: 10.47852/bonviewaia32021197
Pub Date: 2023-01-01. DOI: 10.47852/bonviewaia2202293
Padmaprabha Preethi, Hosahalli Ramappa Mamatha
Indian history is derived from ancient writings on inscriptions, palm leaves, copper plates, coins, and many other media. Epigraphers read these inscriptions and produce meaningful interpretations. Automating the reading process is the interest of our study, and in this paper, segmentation to detect text on digitized inscriptional images is dealt with in detail. Character segmentation from epigraphical images helps an optical character recognizer in the training and recognition of old regional scripts. Epigraphical images are drawn from estampages containing scripts from various periods, from Brahmi in the 3rd century BC to the medieval period of the 15th century AD. The scripts or characters present in digitized epigraphical images are illegible and have complex, noisy background textures. To achieve script/text segmentation, a region-based convolutional neural network (CNN) is employed to detect characters in the images. The proposed method uses selective search to identify text regions and forwards them to trained CNN models to extract feature vectors. These feature vectors are fed to support vector machine classifiers for classification, and text is recognized by drawing a bounding box based on a confidence score. AlexNet, VGG16, ResNet50, and InceptionV3 are used as CNN models for experimentation, and InceptionV3 performed best. A total of 197 images are used for experimentation, of which 70 samples are printed denoised epigraphical images, 40 are denoised estampage images, and 87 are noisy estampage images. Segmentation results of 74.79% for printed denoised epigraphical images, 71.53% for denoised estampage epigraphical images, and 18.11% for noisy estampage images were recorded by InceptionV3. The segmented characters are used for epigraphical applications like period/era prediction and character recognition. The Fast and Faster region-based (R-CNN) design approaches were also tested and illustrated in this paper.
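The proposals-to-features-to-SVM pipeline described above can be expressed as a skeleton with pluggable components. The callables are injected stubs (assumptions standing in for selective search, the CNN feature extractor, and the SVM), so the sketch stays framework-agnostic:

```python
def detect_text_regions(image, propose, featurize, classify, threshold=0.5):
    """Skeleton of an R-CNN-style pipeline: region proposals (e.g. selective
    search) -> feature vectors (CNN) -> classifier scores (SVM) -> keep boxes
    whose confidence clears a threshold, highest confidence first."""
    detections = []
    for box in propose(image):
        score = classify(featurize(image, box))
        if score >= threshold:
            detections.append((box, score))
    return sorted(detections, key=lambda d: -d[1])
```

A production detector would add non-maximum suppression over the surviving boxes; the abstract does not state whether the paper does, so it is omitted here.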
Padmaprabha Preethi and Hosahalli Ramappa Mamatha. "Region-Based Convolutional Neural Network for Segmenting Text in Epigraphical Images." Artificial Intelligence and Applications, 2023. DOI: 10.47852/bonviewaia2202293
Background: Consecutive clustering is a type of learning method built on neural networks. It is frequently used in different domains, including biomedical research. Consecutive (adjacent) clustering is highly useful where specific locations or addresses denote the individual features in the data that need to be grouped consecutively. One useful application of consecutive clustering in biomedical research is differentially methylated region (DMR) finding analysis on various CpG sites (features). Method: Much research has been carried out on deep learning and consecutive clustering in the biomedical domain, but for epigenetics studies, very few survey papers published so far have treated consecutive clustering alongside deep learning. Hence, in this study, we contribute a comprehensive survey of several fundamental categories relevant to consecutive clustering, e.g., Convolutional Neural Network (CNN), Auto-Encoder (AE), Restricted Boltzmann Machine (RBM) and Deep Belief Network (DBN), Recurrent Neural Network (RNN), Deep Stacking Network (DSN), and Long Short-Term Memory (LSTM) / Gated Recurrent Unit (GRU) networks, along with their applications, advantages, and disadvantages. Different forms of consecutive clustering algorithms used for DNA methylation data (viz., supervised and unsupervised DMR finding methods) are covered in the second section, along with their advantages, shortcomings, and overall performance estimates (power, time). Conclusion: Our survey presents the latest research on consecutive clustering algorithms for healthcare purposes. The usages, benefits, and shortcomings of each algorithm, along with its performance evaluation, are elaborated in our manuscript, so that new biomedical researchers can understand and use these tools and algorithms from their research perspective.
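The adjacency constraint that distinguishes consecutive clustering of CpG sites can be sketched as grouping sorted genomic coordinates whose neighbors lie within a maximum gap, a rule common in DMR finders. The function name and default gap are assumptions; the surveyed methods' exact parameters differ:

```python
def adjacent_clusters(positions, max_gap=500):
    """Group sorted genomic coordinates so that consecutive members of a
    cluster are at most `max_gap` bases apart. Unlike ordinary clustering,
    only adjacent positions can ever share a cluster."""
    clusters, current = [], []
    for pos in sorted(positions):
        if current and pos - current[-1] > max_gap:
            clusters.append(current)   # gap too large: close the cluster
            current = []
        current.append(pos)
    if current:
        clusters.append(current)
    return clusters
```

DMR finding then tests each such cluster of CpG sites for a consistent methylation difference between groups, rather than testing sites in isolation.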
Ayan Mukherji, Arindam Mondal, Rajib Banerjee, and Saurav Mallik. "Recent Landscape of Deep Learning Intervention and Consecutive Clustering on Biomedical Diagnosis." Artificial Intelligence and Applications, 2023. DOI: 10.47852/bonviewaia2202480
Pub Date: 2023-01-01. DOI: 10.47852/bonviewaia3202749
M. Koit
Different kinds of negotiations and the arguments presented in them are considered in the paper. Discussions in the Parliament of Estonia as well as negotiations in telemarketing calls and in travel and everyday conversations are studied. In the Parliament, negotiation involves many participants, while the other conversations take place between two participants. In the analysed texts, argument components (premises and claims), argument structures (basic, linked, etc.), and relations (support, attack, and rebuttal) are annotated manually. For annotating dialogue acts, a customized typology and custom-made software are used. This preliminary study aims to find cues for automatically recognizing arguments in Estonian texts. It turns out that certain dialogue acts and language features contribute to the recognition of arguments and inter-argument relations.
M. Koit. "How to Recognize Arguments? A Study of Human Negotiations." Artificial Intelligence and Applications, 2023. DOI: 10.47852/bonviewaia3202749
Pub Date: 2023-01-01. DOI: 10.47852/bonviewaia3202593
Dipali B. Jadhav, Gaju S. Chavan, V. C. Bagal, R. Manza
Biometrics is the science and technology of analyzing biological traits of the human body to strengthen system security by providing accurate and reliable patterns for individual verification and identification; its solutions are mostly used in banking, ATMs, cell phones, government, enterprises, and so on. A biometric system built on a single biological trait is called a unimodal biometric system. Unimodal biometric systems work well, but they often suffer from certain problems when faced with noisy data, such as limited degrees of freedom, intra-class variations, spoof attacks, and non-universality. Several of these issues can be tackled by using multimodal biometric systems that combine at least two biometric modalities. We have reviewed papers on multimodal biometrics covering face, iris, fingerprint, palmprint, hand geometry, ear, voice, and signature. In this article, we cover different approaches that use face and palmprint for human authentication.
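One standard way to combine two modalities such as face and palmprint is score-level fusion: normalize each matcher's score to a common range, then take a weighted sum. The score ranges, weight, and function names below are illustrative assumptions, not a method from the reviewed papers:

```python
def minmax(x, lo, hi):
    """Min-max normalize a raw matcher score to [0, 1]."""
    return (x - lo) / (hi - lo) if hi > lo else 0.0

def fuse(face_score, palm_score, face_range=(0, 100), palm_range=(0, 1), w_face=0.6):
    """Hypothetical score-level fusion of a face matcher and a palmprint
    matcher: normalize each score, then weighted-sum into one match score.
    A verification system would accept if the result clears a threshold."""
    f = minmax(face_score, *face_range)
    p = minmax(palm_score, *palm_range)
    return w_face * f + (1 - w_face) * p
```

Score-level fusion is popular precisely because it needs only the matchers' outputs, not access to their features or raw samples.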
{"title":"Review on Multimodal Biometric Recognition System Using Machine Learning","authors":"Dipali B. Jadhav, Gaju S. Chavan, V. C. Bagal, R. Manza","doi":"10.47852/bonviewaia3202593","DOIUrl":"https://doi.org/10.47852/bonviewaia3202593","url":null,"abstract":"Biometrics is the science and technology of analyzing biological characteristics of the human body to strengthen system security by providing accurate and reliable patterns for personal verification and identification; its solutions are widely used in banks, ATMs, cell phones, governments, enterprises, and so on. A biometric system that relies on a single biological trait is called a unimodal biometric system. Unimodal systems perform well, but they often suffer when faced with noisy data from problems such as restricted degrees of freedom, intra-class variations, spoof attacks, and non-universality. Several of these issues can be addressed by multimodal biometric systems that combine two or more biometric modalities. We have reviewed papers on multimodal biometrics covering the face, iris, fingerprint, palmprint, hand geometry, ear, voice, and signature. In this article, we cover different approaches that combine face and palmprint for human authentication.","PeriodicalId":91205,"journal":{"name":"Artificial intelligence and applications (Commerce, Calif.)","volume":"19 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88584794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-01-01DOI: 10.47852/bonviewaia2202354
Sadia Nur Amin, Palaiahnakote Shivakumara, Tang Xue Jun, Kai Yang Chong, Dillon Leong Lon Zan, Ramachandra Rahavendra
The food industry is becoming more competitive by the day, with restaurants introducing new cuisines to their menus in an attempt to climb the ladder. Yet they often fail to improve their performance because customers typically have only the waiters to describe the dishes to them, which frequently leaves expectations unmet. To allow customers to visualize their orders more informatively, this paper presents an Android application that uses augmented reality (AR) to overlay digital three-dimensional (3D) food models onto a quick response (QR) code image marker on the food menu, viewed through the device camera. Moreover, the price and a detailed list of the ingredients used to prepare the dish, along with its nutritional and calorie content, appear beside the 3D food model to keep customers fully informed about what they are ordering. This work designed the 3D food models in the Blender 3D tool, imported them into a Unity 3D application with the Vuforia software development kit preinstalled, and used Figma to design the system's user interface. The study's outcome is an AR application that gives customers a more engaging way to visualize dishes in 3D form, which can improve customer sales and restaurant loyalty.
{"title":"An Augmented Reality-Based Approach for Designing Interactive Food Menu of Restaurant Using Android","authors":"Sadia Nur Amin, Palaiahnakote Shivakumara, Tang Xue Jun, Kai Yang Chong, Dillon Leong Lon Zan, Ramachandra Rahavendra","doi":"10.47852/bonviewaia2202354","DOIUrl":"https://doi.org/10.47852/bonviewaia2202354","url":null,"abstract":"The food industry is becoming more competitive by the day, with restaurants introducing new cuisines to their menus in an attempt to climb the ladder. Yet they often fail to improve their performance because customers typically have only the waiters to describe the dishes to them, which frequently leaves expectations unmet. To allow customers to visualize their orders more informatively, this paper presents an Android application that uses augmented reality (AR) to overlay digital three-dimensional (3D) food models onto a quick response (QR) code image marker on the food menu, viewed through the device camera. Moreover, the price and a detailed list of the ingredients used to prepare the dish, along with its nutritional and calorie content, appear beside the 3D food model to keep customers fully informed about what they are ordering. This work designed the 3D food models in the Blender 3D tool, imported them into a Unity 3D application with the Vuforia software development kit preinstalled, and used Figma to design the system's user interface. The study's outcome is an AR application that gives customers a more engaging way to visualize dishes in 3D form, which can improve customer sales and restaurant loyalty.","PeriodicalId":91205,"journal":{"name":"Artificial intelligence and applications (Commerce, Calif.)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134955516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}