A Domain Ontology for Modeling the Book of Purification in Islam
H. Alawwad
Pub Date: 2022-05-28 | DOI: 10.5121/csit.2022.120905 | Artificial intelligence and applications (Commerce, Calif.), vol. 60

This paper aims to fill a gap in the coverage of major Islamic topics by developing an ontology for the Book of Purification in Islam. Many trusted books begin with the Book of Purification because it is the key to prayer (the second pillar after the Shahadah, the profession of faith) and is required for Islamic duties such as performing Umrah and Hajj. The strategy for developing the ontology comprised six steps: (1) domain identification, (2) knowledge acquisition, (3) conceptualization, (4) classification, (5) integration and implementation, and (6) ontology generation. Examples of the resulting tables and classifications are included in this paper. The focus is on the design and analysis phases; the full technical implementation of the proposed ontology is beyond this paper's scope, although we present an initial implementation to illustrate the steps of our strategy. We ensure that this ontology, or knowledge representation, of the Book of Purification in Islam satisfies reusability: the main attributes, concepts, and their relationships are defined and encoded, and this formal encoding will be available for sharing and reuse.
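The conceptualization and encoding steps described above can be sketched as a minimal triple store. This is an illustrative sketch only, not the paper's implementation; the class names (`Purification`, `Wudu`, `Ghusl`) and the `required_for` relation are hypothetical stand-ins for the ontology's actual vocabulary.

```python
# Minimal sketch of encoding ontology concepts and relationships as
# (subject, predicate, object) triples, so the representation can be
# shared and reused. Names below are illustrative, not the paper's.
from collections import defaultdict

class Ontology:
    def __init__(self):
        self.triples = set()             # (subject, predicate, object)
        self.children = defaultdict(set)

    def add_class(self, name, parent=None):
        self.triples.add((name, "is_a", parent or "Thing"))
        self.children[parent or "Thing"].add(name)

    def add_relation(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def subclasses(self, name):
        """All classes under `name`, transitively."""
        found, stack = set(), [name]
        while stack:
            for child in self.children[stack.pop()]:
                if child not in found:
                    found.add(child)
                    stack.append(child)
        return found

onto = Ontology()
onto.add_class("Purification")
onto.add_class("Wudu", parent="Purification")    # ablution
onto.add_class("Ghusl", parent="Purification")   # full-body purification
onto.add_relation("Wudu", "required_for", "Prayer")

print(sorted(onto.subclasses("Purification")))   # ['Ghusl', 'Wudu']
```

A real encoding would target a standard format such as OWL so that the "sharing and reusing" goal is met by existing tooling; the triple structure above is the common denominator of such formats.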
Indexed Parallel Sphere Packing for Arbitrary Domains
Cuba Lajo Rubén Adrián, Loaiza Fernández Manuel Eduardo
Pub Date: 2022-05-28 | DOI: 10.5121/csit.2022.120918 | Artificial intelligence and applications (Commerce, Calif.), vol. 61

Particle packings are used to simulate granular matter, which has various uses in industry. The most important characteristics of a packing are its density and its construction time; the density is the percentage of the object's space filled with particles, also known as compaction or solid fraction. A particle packing should be as dense as possible, work on any object, and have a low build time. Current proposals have significantly reduced the construction time of a packing and have also managed to increase its density; however, they have certain restrictions, such as working on only a single type of object and being strongly affected by the characteristics of the object. The objective of this work is to improve a parallel sphere packing for arbitrary domains. The running time of the packing to be improved was directly affected by the number of triangles in the object's mesh. This enhancement focuses on creating a parallel data structure to reduce build time. The proposed method reduces execution time when the number of triangles is high, but the data structure takes up a significant amount of memory. However, at the high densities of interest, that is, densities between 60% and 70%, the sphere packing construction does not exhaust the memory.
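A data structure that removes the per-triangle cost the abstract describes is plausibly a spatial index over the mesh. The sketch below shows one common choice, a uniform grid, so that each sphere candidate is tested only against nearby triangles; this is an assumption about the general technique, not the paper's exact structure, and it illustrates the time-for-memory trade-off the abstract mentions.

```python
# Hedged sketch: a uniform grid index over mesh triangles. Instead of
# testing every sphere candidate against every triangle, only triangles
# whose bounding box overlaps the candidate's grid cells are tested.
from collections import defaultdict

def build_grid(triangles, cell):
    """Map each grid cell to the triangles whose AABB overlaps it."""
    grid = defaultdict(list)
    for ti, tri in enumerate(triangles):
        xs, ys, zs = zip(*tri)
        lo, hi = (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))
        for ix in range(int(lo[0] // cell), int(hi[0] // cell) + 1):
            for iy in range(int(lo[1] // cell), int(hi[1] // cell) + 1):
                for iz in range(int(lo[2] // cell), int(hi[2] // cell) + 1):
                    grid[(ix, iy, iz)].append(ti)
    return grid

def candidates(grid, center, radius, cell):
    """Triangle indices near a sphere, found via its grid cells."""
    near = set()
    lo = [c - radius for c in center]
    hi = [c + radius for c in center]
    for ix in range(int(lo[0] // cell), int(hi[0] // cell) + 1):
        for iy in range(int(lo[1] // cell), int(hi[1] // cell) + 1):
            for iz in range(int(lo[2] // cell), int(hi[2] // cell) + 1):
                near.update(grid.get((ix, iy, iz), ()))
    return near

tris = [((0, 0, 0), (1, 0, 0), (0, 1, 0)), ((5, 5, 5), (6, 5, 5), (5, 6, 5))]
grid = build_grid(tris, cell=1.0)
print(candidates(grid, (0.2, 0.2, 0.0), 0.3, 1.0))  # {0}: only the nearby triangle
```

Each cell's triangle list is independent, which is what makes the structure amenable to the parallel construction the paper targets; the memory cost grows with the number of occupied cells, matching the abstract's caveat.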
An Adaptively Secure NIPE Scheme based on DCR Assumption
Haiying Gao, Chao Ma
Pub Date: 2022-05-28 | DOI: 10.5121/csit.2022.120914 | Artificial intelligence and applications (Commerce, Calif.), vol. 41

Non-zero inner product encryption (NIPE) provides fine-grained access control over private data, but existing NIPE schemes are mainly constructed from hardness assumptions over bilinear groups and lattices and lack homomorphism. To meet the needs of users who want to control their private data while cloud servers process ciphertexts directly in a cloud computing environment, this paper designs a non-zero inner product encryption scheme based on the decisional composite residuosity (DCR) assumption. Specifically, the access control policy is embedded in the ciphertext as a vector y, and the user attribute vector x is embedded in the secret key. If the inner product of the encryptor's policy vector y and the decryptor's attribute vector x is non-zero, the decryptor can decrypt correctly. The scheme is additively homomorphic over the plaintext-ciphertext space, and it can be proved additively homomorphic and adaptively secure.
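The DCR assumption is the one underlying Paillier-style encryption, and the additive homomorphism claimed above is exactly the property such schemes provide: multiplying ciphertexts adds plaintexts. The toy sketch below demonstrates that property with deliberately insecure demo parameters; it is not the paper's NIPE construction, only the homomorphic building block.

```python
# Toy Paillier encryption (DCR-based) with insecure demo parameters,
# illustrating only the additive homomorphism that a DCR-based scheme
# can offer. This is NOT the paper's NIPE construction.
import math
import random

p, q = 17, 19                  # demo primes; real schemes use ~1024-bit primes
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)   # Carmichael function of n = p*q
mu = pow(lam, -1, n)           # valid because we fix the generator g = n + 1

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    # c = (1+n)^m * r^n mod n^2
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    u = pow(c, lam, n2)        # kills the r^n blinding factor
    return ((u - 1) // n) * mu % n

c1, c2 = enc(3), enc(4)
print(dec(c1 * c2 % n2))       # 7: ciphertext product decrypts to plaintext sum
```

This is what lets a cloud server "directly process ciphertexts": it can sum encrypted values without ever seeing the plaintexts.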
Approaches in Fake News Detection: An Evaluation of Natural Language Processing and Machine Learning Techniques on the Reddit Social Network
M. Shariff, Brian Thoms, Jason T. Isaacs, Vida Vakilian
Pub Date: 2022-05-28 | DOI: 10.5121/csit.2022.120910 | Artificial intelligence and applications (Commerce, Calif.), vol. 94

Classifier algorithms are a subfield of data mining and play an integral role in finding patterns and relationships within large datasets. In recent years, fake news detection has become a popular area of data mining for several important reasons, including its negative impact on decision-making and its virality within social networks. In the past, traditional fake news detection relied primarily on information context, while modern approaches rely on auxiliary information to classify content. Modelling with machine learning and natural language processing can aid in distinguishing between fake and real news. In this research, we mine data from Reddit, the popular online discussion forum and social news aggregator, and measure machine learning classifiers in order to evaluate each algorithm's accuracy in detecting fake news using only a minimal subset of data.
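One of the simplest classifiers of the kind such evaluations include is multinomial Naive Bayes over bag-of-words features. The sketch below is a minimal, stdlib-only version; the toy titles and labels are invented for illustration and stand in for mined Reddit posts, and the paper's actual classifier set and features may differ.

```python
# Minimal multinomial Naive Bayes over bag-of-words features. The toy
# "fake"/"real" titles are invented stand-ins for mined Reddit posts.
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (text, label). Returns counts needed for prediction."""
    word_counts = defaultdict(Counter)
    class_counts = Counter()
    vocab = set()
    for text, label in docs:
        tokens = text.lower().split()
        word_counts[label].update(tokens)
        class_counts[label] += 1
        vocab.update(tokens)
    return word_counts, class_counts, vocab

def predict(text, word_counts, class_counts, vocab):
    total_docs = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label in class_counts:
        lp = math.log(class_counts[label] / total_docs)      # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for tok in text.lower().split():
            lp += math.log((word_counts[label][tok] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [
    ("miracle cure doctors hate this trick", "fake"),
    ("shocking secret the government hides", "fake"),
    ("senate passes budget bill after debate", "real"),
    ("city council approves new transit plan", "real"),
]
model = train(docs)
print(predict("shocking miracle trick", *model))  # fake
```

Accuracy on a held-out split, as the paper measures, would then be the fraction of test posts for which `predict` returns the true label.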
Monocular Camera Calibration using Projective Invariants
Vilca Vargas Jose R, Quio Añauro Paúl A, Loaiza Fernández Manuel E
Pub Date: 2022-05-28 | DOI: 10.5121/csit.2022.120921 | Artificial intelligence and applications (Commerce, Calif.), vol. 65

Camera calibration is a crucial step in improving the accuracy of images captured by optical devices. In this paper, we take advantage of projective geometry properties to select frames with high-quality control points during the data acquisition stage and, subsequently, to perform an accurate camera calibration. The proposed method consists of four steps. First, we select acceptable frames based on the position of the control points. Next, we use projective-invariant properties to find the optimal control points and perform an initial camera calibration using the calibration algorithm implemented in OpenCV. Finally, we perform an iterative process of control-point refinement, projective-invariant checks, and recalibration until the calibration results converge below a defined threshold.
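The classic projective invariant for collinear points is the cross-ratio: it survives any projective transformation, so it can be used to validate detected control points independently of the (unknown) camera pose. The sketch below verifies this invariance numerically for a 1D projective map; the specific homography coefficients are illustrative, and the paper's actual invariant checks may be richer.

```python
# The cross-ratio of four collinear points is invariant under projective
# maps, so a check like this can validate detected control points.
def cross_ratio(a, b, c, d):
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

def homography(x, h=(2.0, 1.0, 0.5, 3.0)):
    """1D projective map x -> (a*x + b) / (c*x + d); coefficients illustrative."""
    a, b, c, d = h
    return (a * x + b) / (c * x + d)

pts = [0.0, 1.0, 2.0, 4.0]
before = cross_ratio(*pts)
after = cross_ratio(*[homography(x) for x in pts])
print(abs(before - after) < 1e-9)  # True: cross-ratio preserved
```

In practice, points whose measured cross-ratio deviates from the pattern's known value beyond a tolerance would be rejected before feeding the survivors to OpenCV's `calibrateCamera`.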
Autonomous Vehicles Lateral Control under Various Scenarios
Mohamed Ali Jemmali, Hussein T. Mouftah
Pub Date: 2022-05-28 | DOI: 10.5121/csit.2022.120922 | Artificial intelligence and applications (Commerce, Calif.), vol. 86

In this paper, the autonomous vehicle is represented as a discrete-time Takagi-Sugeno (T-S) fuzzy model. We use the discrete-time T-S model because, unlike the continuous-time T-S fuzzy model, it is ready for implementation. The main goal is to keep the autonomous vehicle on the centreline of the lane regardless of external disturbances. These disturbances, wind force and unknown road curvature, are applied to test whether the autonomous vehicle deviates from the centreline. To ensure that the vehicle remains on the centreline, we propose a discrete-time fuzzy lateral controller, also called a steering controller.
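A discrete-time T-S fuzzy controller typically blends local linear state-feedback laws by rule memberships (parallel distributed compensation). The sketch below shows that structure on a toy two-state lateral-error model; the matrices, gains, and membership functions are invented for illustration and are not the paper's design.

```python
# Hedged sketch of discrete-time T-S fuzzy control (parallel distributed
# compensation): two local linear models of lateral dynamics are blended
# by rule memberships. All matrices and gains are illustrative only.
def memberships(e):
    """Triangular memberships on lateral error e (metres), |e| <= 1."""
    h1 = max(0.0, min(1.0, 1.0 - abs(e)))  # rule 1: "small error"
    h2 = 1.0 - h1                          # rule 2: "large error"
    return h1, h2

# Local models x_{k+1} = A_i x_k + B_i u_k, x = (lateral error, heading error)
A = [((1.0, 0.1), (0.0, 1.0)), ((1.0, 0.2), (0.0, 1.0))]
B = [(0.0, 0.1), (0.0, 0.2)]
K = [(2.0, 3.0), (1.5, 2.5)]   # local state-feedback gains

def step(x):
    h = memberships(x[0])
    # Blended control law: u = -sum_i h_i * K_i x
    u = -sum(hi * (Ki[0] * x[0] + Ki[1] * x[1]) for hi, Ki in zip(h, K))
    # Blended plant: x' = (sum_i h_i A_i) x + (sum_i h_i B_i) u
    Ab = [[sum(hi * Ai[r][c] for hi, Ai in zip(h, A)) for c in range(2)] for r in range(2)]
    Bb = [sum(hi * Bi[r] for hi, Bi in zip(h, B)) for r in range(2)]
    return tuple(sum(Ab[r][c] * x[c] for c in range(2)) + Bb[r] * u for r in range(2))

x = (0.5, 0.0)                 # start 0.5 m off the centreline
for _ in range(80):
    x = step(x)
print(abs(x[0]) < 0.05)        # True: lateral error converges toward zero
```

Each local closed loop here has its eigenvalues inside the unit circle, which is the discrete-time analogue of the stability conditions a real T-S design would certify with linear matrix inequalities.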
The Application of Techniques Derived from Artificial Intelligence to the Prediction of the Solvency of Bank Customers: Case of the Application of the Cart Type Decision Tree (DT)
Karim Amzile, Rajaa Amzile
Pub Date: 2022-03-19 | DOI: 10.5121/csit.2022.120503 | Artificial intelligence and applications (Commerce, Calif.), vol. 78

In this study, we applied the CART-type decision tree (DT-CART) method, derived from artificial intelligence techniques, to predicting the solvency of bank customers, using historical customer data. We adopted a data mining process: we began with data preprocessing, cleaning the data and deleting all rows with outliers, missing values, or empty columns; we then fixed the variable to be explained (the dependent variable, or target) and eliminated all explanatory (independent) variables that were not significant, using univariate analysis and the correlation matrix; finally, we applied the CART decision tree method using the SPSS tool.
After building our model (DT-CART), we evaluated and tested its performance: we found an accuracy and precision of 71%, and hence an error rate of 29%. This allows us to conclude that our model performs at a fairly good level in terms of precision and predictive power, specifically in predicting the solvency of our banking customers.
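The core of the CART method is choosing, at each node, the split that minimizes the weighted Gini impurity of the two children. The sketch below shows that criterion on an invented income-vs-solvency toy dataset; the real study's features and splits come from its bank data and SPSS, not from this example.

```python
# The heart of CART: pick the threshold minimizing the weighted Gini
# impurity of the two child nodes. The income/solvency toy data below
# is invented for illustration.
def gini(labels):
    """Gini impurity for binary labels: 1 - p^2 - (1-p)^2 = 2p(1-p)."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 2 * p * (1 - p)

def best_split(xs, ys):
    """Return (threshold, weighted_gini) over splits x <= t vs x > t."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        w = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if w < best[1]:
            best = (t, w)
    return best

income = [12, 15, 18, 40, 45, 60]   # feature (thousands), hypothetical
solvent = [0, 0, 0, 1, 1, 1]        # 1 = customer repaid
print(best_split(income, solvent))  # (18, 0.0): a perfect split at 18
```

CART grows the tree by applying this search recursively to each child node; the 71% accuracy reported above is then measured by running held-out customers down the finished tree.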
BTF Prediction Model using Unsupervised Learning
Soichiro Kimura, Kensuke Tobitani, N. Nagata
Pub Date: 2022-03-19 | DOI: 10.5121/csit.2022.120505 | Artificial intelligence and applications (Commerce, Calif.), vol. 1

The impressions evoked by textures are called affective textures and are considered important in evaluating and judging the quality of an object. Technologies for understanding and controlling sensory textures are therefore needed in product design. In this study, we propose a BTF (bidirectional texture function) prediction method using a DNN as a first attempt at generating textures based on affective texture recognition. The method takes as input a series of texture images at continuously varying viewpoint angles, which enables the generation of texture images with continuously changing angles. We tested the validity of the proposed method on textile, wood, and paper. The results show that the proposed method is effective for predicting diffuse reflection optical properties and both irregular and regular patterns.
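As a drastically simplified stand-in for predicting appearance from viewpoint angle, the sketch below fits a Lambertian diffuse model I = kd·cos(θ) to intensities sampled at several angles by closed-form least squares. This is only a one-parameter illustration of the "predict reflectance from angle" idea; the paper's DNN predicts full texture images, not a single coefficient, and the data here is synthetic.

```python
# Much-simplified stand-in for angle-conditioned appearance prediction:
# fit a Lambertian model I = kd * cos(theta) to synthetic intensity
# samples by closed-form least squares, then predict at new angles.
import math

angles = [math.radians(a) for a in (0, 15, 30, 45, 60, 75)]
kd_true = 0.8
intensities = [kd_true * math.cos(t) for t in angles]  # noiseless samples

# Least squares for I = kd*cos(theta): kd = sum(I*cos) / sum(cos^2)
cosines = [math.cos(t) for t in angles]
kd_fit = sum(i * c for i, c in zip(intensities, cosines)) / sum(c * c for c in cosines)

def predict(theta):
    """Predicted diffuse intensity at a new viewpoint angle (radians)."""
    return kd_fit * math.cos(theta)

print(round(kd_fit, 3))  # 0.8: the diffuse coefficient is recovered
```

A DNN generalizes this idea: instead of one analytic coefficient, it learns an angle-to-image mapping, which is what allows regular and irregular patterns to be predicted as well as diffuse behaviour.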
Piano4Play: An Automated Piano Transcription and Keyboard Visualization System using AI and Deep Learning Techniques
Jinge Liu, Shuyu Wang
Pub Date: 2022-03-19 | DOI: 10.5121/csit.2022.120502 | Artificial intelligence and applications (Commerce, Calif.), vol. 11

Piano keyboard visualization is very popular right now, but there are very few virtual piano keyboard visualizations [1]. We use Unity to display a virtual piano keyboard on which users can play piano pieces themselves or play back a recording online [2]. Users can then listen to the recorded piece and watch how it is played on the visual keyboard, giving them a clear idea of how the song is played [3]. For those who play themselves, hearing and watching the virtual piano play back their performance lets them tell whether they played offbeat or missed notes. Piano4Play is an automated piano transcription and keyboard visualization system using AI and deep learning techniques. The user uploads a recorded piece of music, and our app visualizes the music on a digital piano keyboard. Beginners can see how the music will be played on the piano, helping them learn more quickly and easily, and advanced players can use the app to check whether they made any mistakes while playing and improve.
Our app uses WAV and MIDI files, repl, a real-time database, Google Colab, and Unity.
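Keyboard visualization of a transcribed MIDI file needs a mapping from MIDI note numbers to positions on an 88-key keyboard. The MIDI numbering below (A0 = 21, middle C = 60, C8 = 108) is the standard convention; the name lookup is a stand-in for the Unity-side rendering, which this sketch does not attempt.

```python
# Mapping MIDI note numbers to 88-key keyboard positions for
# visualization. A0=21 .. C8=108 is the standard MIDI convention; the
# rendering itself (Unity in the paper) is out of scope here.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def key_index(midi_note):
    """0-based key position on an 88-key keyboard (A0=21 .. C8=108)."""
    if not 21 <= midi_note <= 108:
        raise ValueError("note outside the 88-key range")
    return midi_note - 21

def note_name(midi_note):
    octave = midi_note // 12 - 1       # MIDI octave convention: C4 = 60
    return f"{NOTE_NAMES[midi_note % 12]}{octave}"

print(key_index(60), note_name(60))    # 39 C4 (middle C is the 40th key)
```

Comparing the timestamps of these key events between the user's recording and a reference MIDI is one simple way to flag the offbeat or missed notes the abstract mentions.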
WebReview: An Intelligent Classification Platform to Automate the Evaluation and Ranking of Website Quality and Usability using Artificial Intelligence and Web Scraping Techniques
Darren Xu, Dexter Xu, Ang Li
Pub Date: 2022-03-19 | DOI: 10.5121/csit.2022.120504 | Artificial intelligence and applications (Commerce, Calif.), vol. 28

Paywalls are a staple of the internet, seen on a vast number of websites [1]. Encountering a paywall is always annoying, whether you are doing work for school or just trying to catch up on the latest news [2]. To eliminate this annoyance we created Wall Breaker, a browser extension whose primary task is to bypass paywalls using a variety of methods [3]. Our extension uses methods such as opening the website in an incognito tab or acting as a new user when clicking on a link. Although not the first of its kind, our extension is unique in the methods and techniques it uses. The popup is easy to use and simple to look at, providing a good user experience. Wall Breaker works on most websites, both popular and lesser known. It makes no distinction between types of websites, and its methods can be used on any page. While Wall Breaker might not work on every website, such sites are few and far between.