Augmentation for Blood Doping Discovery in Sports using Random Forest Ensembles with LightGBM
Pub Date: 2022-07-26 | DOI: 10.36548/jucct.2022.2.006
D. Sasikala, K. Venkatesh Sharma
Sports administrators around the globe are grappling with serious challenges owing to the illicit practices some athletes adopt to enhance their performance, such as taking hormone-based drugs or receiving blood transfusions to increase strength and amplify the effect of training. The current direct tests for detecting these practices are laboratory-based and remain limited by cost, the availability of medical experts, and similar constraints, which motivates the search for indirect assessments. With the growing interest in Artificial Intelligence (AI) in healthcare, it is worthwhile to propose a procedure built on blood parameters to support decision making. In this paper, a statistical and machine learning (ML) based approach is proposed to detect the doping agent rhEPO in blood samples.
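As a rough illustration of the approach this abstract outlines, the sketch below combines a Random Forest with LightGBM in a soft-voting ensemble over blood parameters. Everything here is assumed: the data is synthetic, and the marker names (haemoglobin, reticulocyte percentage, OFF-score) are illustrative stand-ins, not the paper's feature set.

```python
# Minimal sketch: RF + LightGBM soft-voting ensemble flagging likely rhEPO
# use from blood markers. Data and feature names are synthetic assumptions.
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 1000
# Hypothetical blood markers: haemoglobin (g/dL), reticulocyte %, OFF-score.
X = np.column_stack([
    rng.normal(15.0, 1.0, n),   # haemoglobin
    rng.normal(1.0, 0.3, n),    # reticulocyte percentage
    rng.normal(90.0, 10.0, n),  # OFF-score
])
# Toy label rule: elevated haemoglobin with suppressed reticulocytes.
y = ((X[:, 0] > 15.5) & (X[:, 1] < 1.0)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("lgbm", LGBMClassifier(n_estimators=200, random_state=0)),
    ],
    voting="soft",  # average the two models' predicted probabilities
)
ensemble.fit(X_tr, y_tr)
print(classification_report(y_te, ensemble.predict(X_te)))
```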
Review on Soft Computing in Data Analysis
Pub Date: 2022-07-25 | DOI: 10.36548/jucct.2022.2.005
S. Joseph
The ability to access, store, and process enormous volumes of data has expanded significantly due to technological advances in computation, data storage, networks, and sensors. Large-scale data processing is becoming increasingly important for both research and business. Clients, who are typically domain experts, face an enormous challenge and require assistance in handling huge amounts of data. Soft computing can be characterised as a science of thought and logic that aids in navigating complex systems. This article reviews the use of soft computing techniques to support data analysis in an intelligent manner.
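To make the "soft" in soft computing concrete, here is a minimal, hedged example of fuzzy membership grading, one of the standard soft computing tools such a review typically covers. The thresholds are arbitrary, chosen only for the illustration.

```python
# Instead of a hard threshold, a fuzzy membership function grades how
# strongly a reading belongs to the set "high"; values near the boundary
# get intermediate degrees rather than a brittle yes/no.
import numpy as np

def high_membership(x, low=40.0, high=80.0):
    """Piecewise-linear membership in [0, 1] for the fuzzy set 'high'."""
    return np.clip((x - low) / (high - low), 0.0, 1.0)

readings = np.array([25.0, 55.0, 78.0, 95.0])
for r, m in zip(readings, high_membership(readings)):
    print(f"reading={r:5.1f}  membership(high)={m:.2f}")
```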
Finding the Productivity of Implementing IoT in Malawi and improve the usage of IoT devices to enhance nation Building: A survey
Pub Date: 2022-07-23 | DOI: 10.36548/jucct.2022.2.004
Rimlon Shibi, Ezhil Grace, G. Rashmi, D. N. Ponkumar
The Internet of Things (IoT) is a contemporary technology that is steadily capturing industrial, domestic, and research attention. According to research, the average number of IoT devices per household will reach 50 million in this era. The evolution of IoT will render existing household devices obsolete, so now is a good time to make IoT devices affordable for daily use by people across the world. This research examines the productivity of implementing IoT, without disturbing the existing network architecture and Software Defined Network (SDN), in a developing country like Malawi, and the use of IoT devices in households, offices, and agriculture to support the country's development.
Predictive Analytics with Data Visualization
Pub Date: 2022-07-21 | DOI: 10.36548/jucct.2022.2.003
Satheeshkumar Palanisamy
The need for analytics and BI tools has grown tremendously in organizations across every sector, including finance, software, medicine, and even astronomy, in order to improve overall performance. C-factor Computing shares this vision of empowering its existing products through data analysis and forecasting to better serve customers and support stakeholders' decision making. The project involves five key aspects of analytics: data acquisition, big data storage, data transformation (unstructured to structured), data wrangling, and predictive modeling/visualization. Data acquisition involves gathering the existing transactional and search data of customers and travel aggregators who use the product. This data is used to create powerful dashboards capable of predictive analytics that help the company make informed choices. These aspects can be achieved with various available tools, but each stage requires testing to identify the software best suited to the company's data. Hence, the project studies and implements selected tools to provide the right framework for an interactive, predictive-analytics dashboard that can also be integrated into the company's existing products.
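As a compressed, assumption-laden sketch of the five stages listed above, the following wires synthetic "transactional" data through acquisition, storage/transformation, wrangling, and a simple predictive model. The column names and the linear model are illustrative, not the company's actual stack.

```python
# Toy end-to-end analytics pipeline over synthetic travel-booking data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# 1. Data acquisition (synthesized here in place of a real source).
raw = pd.DataFrame({
    "date": pd.date_range("2022-01-01", periods=120, freq="D"),
    "searches": rng.poisson(200, 120),
    "bookings": rng.poisson(20, 120),
})

# 2-4. Storage / transformation / wrangling: index by date, derive features.
df = raw.set_index("date")
df["conversion"] = df["bookings"] / df["searches"]
df["searches_7d"] = df["searches"].rolling(7).mean()
df = df.dropna()

# 5. Predictive modeling: forecast bookings from recent search volume.
model = LinearRegression().fit(df[["searches_7d"]], df["bookings"])
df["predicted_bookings"] = model.predict(df[["searches_7d"]])
print(df[["bookings", "predicted_bookings"]].tail())
```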
LiFi- Future Technology
Pub Date: 2022-07-08 | DOI: 10.36548/jucct.2022.2.002
S. Smys, Jennifer S. Raj
The introduction of Wi-Fi into residences has raised concerns about biological effects on humans, and considerable research has been presented depicting various harms attributed to Wi-Fi radiation. To overcome this, LiFi technology may be used for indoor communication instead of Wi-Fi. LiFi transfers information through visible light and therefore requires a line of sight: light cannot travel through opaque objects, and properties of light such as reflection, refraction, and scattering can cause data loss, so LiFi is preferably used indoors. This article discusses the biological effects attributed to Wi-Fi, Bluetooth, and similar technologies; in short, it enumerates the reported effects of radio waves and the associated psychological changes in humans. This, in turn, can guide the building of systems that also ensure the safety of the ecosystem for the development of mankind.
Augmentation of AI-based Process for Blood Doping Discovery in Sports using Random Forest Ensembles with LightGBM
Pub Date: 2022-07-04 | DOI: 10.36548/jucct.2022.2.001
S. Ayyasamy
Metadata is an exploration of the given data: it organizes data by grouping the collected information into a particular structure for easy understanding, and it reduces the computational burden on data mining algorithms by keeping an organized record. Data related to healthcare applications requires serious attention to privacy, and therefore such information is encrypted in most cases. Processing encrypted data is difficult and may lead to predictions with faulty output. Hence, a blockchain-based data-securing system is proposed in this work for securing the data transmitted from a ubiquitous computing device. The paper also incorporates a whale optimization algorithm to reduce the execution time required for blockchain-based data storage and retrieval.
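The hash-chain sketch below illustrates the blockchain idea in the abstract: each block commits to the previous block's hash, so tampering with stored device data is detectable. It is a minimal illustration under our own assumptions, not the paper's system, and the whale-optimization step is not modeled.

```python
# Minimal tamper-evident hash chain over ubiquitous-device readings.
import hashlib
import json
import time

def block_hash(block):
    payload = json.dumps(
        {k: block[k] for k in ("timestamp", "data", "prev_hash")},
        sort_keys=True,
    ).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data, prev_hash):
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def verify(chain):
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False  # block contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False  # linkage to the previous block is broken
    return True

chain = [make_block({"device": "sensor-1", "reading": 98.6}, "0" * 64)]
chain.append(make_block({"device": "sensor-1", "reading": 99.1}, chain[-1]["hash"]))
print("chain valid:", verify(chain))      # True
chain[0]["data"]["reading"] = 37.0        # tamper with a stored reading
print("after tampering:", verify(chain))  # False
```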
Analysis of Software Sizing and Project Estimation prediction by Machine Learning Classification
Pub Date: 2022-02-28 | DOI: 10.36548/jucct.2021.4.006
A. Sathesh, Y. B. Hamdan
In this study, the outcomes of trials with various projects are analyzed in detail. Estimators can reduce mistakes by combining several estimating strategies, which helps them keep a close eye on the gap between their estimates and reality. Effort estimation is a method of gauging a model's correctness by calculating the total amount of effort needed, and it remains a major pain point in software development. Several prediction methods have recently been created to find an appropriate estimate. The suggested SVM approach is used to reduce the estimation error of the project estimate to the lowest possible value, so that an ideal, near-exact forecast is achieved throughout the software sizing process. Early in a model's development the estimate is inaccurate, since the requirements are not yet defined, but it becomes more and more accurate as the model evolves. Because of this, it is critical to choose a precise estimate for each software model development. Observations and suggestions for further study of software sizing approaches are also included in the report.
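A hedged sketch of SVM-based effort estimation in the spirit of the abstract: support vector regression over two toy project features. The data and the COCOMO-like generating rule are synthetic assumptions, not the study's dataset.

```python
# SVR predicting effort (person-months) from project size and team size.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(2)
n = 300
kloc = rng.uniform(1, 100, n)  # project size in KLOC (synthetic)
team = rng.integers(2, 15, n)  # team size (synthetic)
# Toy effort loosely following a COCOMO-like power law, plus noise.
effort = 2.5 * kloc**1.05 / np.sqrt(team) + rng.normal(0, 5, n)

X = np.column_stack([kloc, team])
X_tr, X_te, y_tr, y_te = train_test_split(X, effort, test_size=0.25,
                                          random_state=0)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=1.0))
model.fit(X_tr, y_tr)
print("MAE (person-months):", mean_absolute_error(y_te, model.predict(X_te)))
```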
Review on Deep Learning based Network Security Tools in Detecting Real-Time Vulnerabilities
Pub Date: 2022-01-31 | DOI: 10.36548/jucct.2021.4.005
E. Baraneetharan
Network-connected hardware and software systems are always open to vulnerabilities when they sit behind an outdated firewall or connect through an unknown Wi-Fi access point. Therefore, network-based anti-virus software and intrusion detection systems are widely installed on network-connected hardware. However, pre-installed security software is not particularly capable of identifying attacks as they evolve. Similarly, the traditional network security tools currently on the market are not efficient at handling attacks when the system is connected to a cloud environment or an IoT network. Hence, recent security tools incorporate deep learning networks to improve their intrusion detection rate. The adaptability of a deep learning network is comparatively high relative to traditional software tools when it is employed with a feedback network: the feedback connections produce a response signal to the network's own connections that serves as a training signal for improving performance. This improves the performance of deep learning-based security tools during real-time operation. The motive of this work is to review the attainments of deep learning-based vulnerability detection models along with their limitations.
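As one concrete instance of the feedback architecture the review describes, the sketch below trains a small recurrent (GRU) classifier, whose hidden state feeds back across time steps, on synthetic traffic-feature sequences. It illustrates the architecture class only; the data and dimensions are invented.

```python
# GRU-based intrusion classifier over synthetic per-flow feature sequences.
import torch
import torch.nn as nn

torch.manual_seed(0)

class FlowClassifier(nn.Module):
    def __init__(self, n_features=8, hidden=32):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # benign vs. malicious

    def forward(self, x):
        _, h = self.gru(x)   # h: final hidden state (the fed-back summary)
        return self.head(h[-1])

# Synthetic data: 256 flows, 20 time steps, 8 features each; attack flows
# get a shifted feature distribution so the toy task is learnable.
X = torch.randn(256, 20, 8)
y = torch.randint(0, 2, (256,))
X[y == 1] += 0.8

model = FlowClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(20):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print("final training accuracy:",
      (model(X).argmax(1) == y).float().mean().item())
```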
A Bayesian Regularization Approach to Predict the Quality of Injection-Moulded Components by statistical SVM for Online Monitoring system
Pub Date: 2022-01-27 | DOI: 10.36548/jucct.2021.4.004
Dinesh Kumar Anguraj
Conventional approaches to evaluating the quality of injection-moulded components are costly, time-consuming, or based on statistical process control characteristics that are not always accurate. Machine learning can instead be used to categorise components by quality. To accurately estimate the quality of injection-moulded components, this study uses an SVM classifier. In addition, the shape of the parts produced by the simulated working process is classified as "qualified" or "unqualified". The quality indicators correlate strongly with recordings from the original database of sensors, such as pressure and temperature, used in the proposed network model for online prediction. Outliers are removed from the original input data to minimize the loss of precision in the model's performance metrics. Data points in the "to-be-confirmed" region (the fit-line area) may be misjudged by this statistical SVM model, since that region lies between the "qualified" and "unqualified" areas. The proposed SVM model also uses Bayesian regularisation to classify final components into distinct quality levels.
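A minimal sketch of the classification step described above, assuming synthetic pressure and temperature readings: gross outliers are dropped by a z-score filter, then an SVM labels parts "qualified" or "unqualified". The paper's Bayesian regularisation stage is not reproduced here.

```python
# Outlier filtering plus SVM quality classification on synthetic sensor data.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
n = 600
pressure = rng.normal(80.0, 5.0, n)      # cavity pressure (MPa), synthetic
temperature = rng.normal(230.0, 8.0, n)  # melt temperature (deg C), synthetic
X = np.column_stack([pressure, temperature])
# Toy rule: parts are "qualified" (1) when both readings stay near nominal.
y = ((np.abs(pressure - 80) < 6) & (np.abs(temperature - 230) < 10)).astype(int)

# Drop gross outliers (more than 3 standard deviations on any feature).
z = np.abs((X - X.mean(0)) / X.std(0))
mask = (z < 3).all(axis=1)
X, y = X[mask], y[mask]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```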
Advanced Classification Technique to Detect the Changes of Regimes in Financial Markets by Hybrid CNN-based Prediction
Pub Date: 2022-01-20 | DOI: 10.36548/jucct.2021.4.003
K. Geetha
Traders' tactics shift in response to changing market circumstances, and the collective conduct of traders can significantly alter the statistical features of price fluctuations. When certain changes in the market eventuate, a "regime shift" takes place. Based on the observed directional shifts, this study attempts to define what distinguishes normal from abnormal regimes in the financial markets. The study uses data from ten financial marketplaces, and for each one a time frame is chosen in which major events may have led to regime change. Using the previous returns of all the companies in the index, this study investigates a hybrid of a CNN with an SVM to anticipate the index's movement. The experimental findings reveal that this CNN model can extract more generic and useful features than conventional technical indicators and produce more resilient and profitable financial performance than earlier machine learning techniques. Most of the inability to forecast is due to randomness, and a small amount is due to non-stationarity. There is also a statistical correlation between the regimes of the various marketplaces, and using this data it is possible to distinguish normal regimes from abnormal ones. The results indicate that stock market efficiency has never before been tested with such a large data set, a significant step forward for weak-form market efficiency testing.
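A hedged sketch of the CNN-SVM hybrid idea: a small 1-D CNN maps windows of returns to feature vectors and an SVM classifies the regime. The returns are synthetic (calm vs. high-volatility), and the CNN here is left untrained as a random feature extractor, a deliberate simplification of end-to-end training.

```python
# CNN feature extraction feeding an SVM regime classifier, on toy returns.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

torch.manual_seed(0)
rng = np.random.default_rng(4)

# Synthetic regimes: label-1 windows have 3x the return volatility.
n, window = 400, 64
y = rng.integers(0, 2, n)
X = rng.normal(0, 1, (n, window)) * np.where(y == 1, 3.0, 1.0)[:, None]

# Untrained CNN as a random feature extractor; average pooling of ReLU
# activations scales with input magnitude, separating the regimes.
extractor = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=5), nn.ReLU(),
    nn.AdaptiveAvgPool1d(4), nn.Flatten(),
)
with torch.no_grad():
    feats = extractor(torch.tensor(X, dtype=torch.float32).unsqueeze(1)).numpy()

X_tr, X_te, y_tr, y_te = train_test_split(feats, y, test_size=0.25,
                                          random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("regime accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```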