SMART CITY With IOT and BIG Data
Ajitpal Singh. Applied Computing eJournal, 2019-06-18. DOI: 10.2139/ssrn.3405839.

Rapid growth in urban population density demands more facilities and resources. Internet of Things (IoT) devices and smart systems are a fast and valuable means of meeting these needs of city development. However, thousands of IoT devices interconnecting and communicating with each other over the Internet generate a huge amount of data, termed Big Data. Integrating IoT services and processing Big Data efficiently for a smart city is a challenging task. Therefore, in this paper we propose a system for smart-city development based on IoT and Big Data analytics. We deploy sensors including smart-home sensors, vehicular networking, weather and water sensors, smart-parking sensors, and surveillance objects. A four-tier architecture is proposed: 1) the Bottom Tier, responsible for IoT sources and data generation and collection; 2) Intermediate Tier 1, responsible for all communication between sensors, relays, base stations, and the Internet; 3) Intermediate Tier 2, responsible for data management and processing using the Hadoop framework; and 4) the Top Tier, responsible for applications that use the analysis results. The data collected from all smart systems is processed in real time using Hadoop with Spark, VoltDB, Storm, or S4. We use existing datasets published by various researchers, covering smart homes, smart parking, weather, pollution, and vehicles, for analysis and testing. All datasets are replayed to test the real-time efficiency of the system. Finally, we evaluate the system's efficiency in terms of throughput and processing.
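The dataset-replay evaluation described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: it replays recorded sensor readings through a batch-processing function (standing in for the Intermediate Tier 2 layer) and reports throughput in records per second.

```python
import time
from collections import deque

def replay_dataset(records, process, batch_size=1000):
    """Replay recorded sensor readings through a processing function
    in fixed-size batches and return (records processed, records/sec)."""
    start = time.perf_counter()
    buffer = deque()
    processed = 0
    for rec in records:
        buffer.append(rec)
        if len(buffer) >= batch_size:
            process(list(buffer))
            processed += len(buffer)
            buffer.clear()
    if buffer:  # flush the final partial batch
        process(list(buffer))
        processed += len(buffer)
    elapsed = time.perf_counter() - start
    return processed, processed / elapsed

# Example: per-batch average temperature from a synthetic smart-home feed
readings = [{"sensor": "home-temp", "value": 20 + (i % 5)} for i in range(10_000)]
averages = []
count, throughput = replay_dataset(
    readings,
    lambda batch: averages.append(sum(r["value"] for r in batch) / len(batch)),
)
```

A real deployment would replace the lambda with a Spark or Storm job, but the measurement loop is the same idea.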
Significance of NoSQL Databases with the Internet of Things
Ajitpal Singh. Applied Computing eJournal, 2019-06-10. DOI: 10.2139/ssrn.3401821.

The significance of the Internet of Things is not that more and more devices, people, and systems are 'connected' with one another. It is that the data generated by these 'things' is shared, processed, analysed, and acted upon through new and innovative applications, applying completely new analysis methods within significantly altered timeframes. The Internet of Things will drive Big Data, providing more information from many different sources in real time, and allow us to gain completely new perspectives on the environments around us. The difference between machine-to-machine (M2M) communication and the Internet of Things (IoT) is driven by significant changes in how data is handled. In M2M, machine-generated data generally reflects well-defined data sets, communicated within established protocols and formats, and delivers well-defined alerts and notifications when values exceed their parameters. M2M applications make efficient use of this data because they have been developed hand in hand with its characteristics. In effect, application and data are intrinsically designed as one to meet the specific purposes of the application in a fairly robust yet static model.
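The contrast drawn above, a rigid M2M schema versus the heterogeneous records IoT devices emit, is exactly what document-oriented NoSQL stores accommodate. The toy in-memory store below (purely illustrative, not a real database) shows the idea: each "thing" can report a different record shape without any schema migration.

```python
class DocumentStore:
    """Minimal schema-less document store, illustrating why NoSQL
    suits IoT data: inserts never validate against a fixed schema."""

    def __init__(self):
        self._docs = []

    def insert(self, doc: dict):
        self._docs.append(doc)  # any shape of record is accepted

    def find(self, **criteria):
        # Return documents whose fields match all given criteria
        return [d for d in self._docs
                if all(d.get(k) == v for k, v in criteria.items())]

store = DocumentStore()
# Two devices with entirely different fields coexist in one collection:
store.insert({"device": "thermostat-1", "temp_c": 21.5})
store.insert({"device": "lock-7", "state": "locked", "battery_pct": 80})
hits = store.find(device="lock-7")
```

An M2M-style relational table would need a column for every possible field; here new device types simply start inserting.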
POSDAO: Proof of Stake Decentralized Autonomous Organization
I. Barinov, Vadim Arasev, Andreas Fackler, Vladimir Komendantskiy, Andrew Gross, A. Kolotov, Daria Isakova. Applied Computing eJournal, 2019-04-29. DOI: 10.2139/ssrn.3368483.

In this paper we introduce POSDAO, a Proof of Stake (POS) algorithm implemented as a decentralized autonomous organization (DAO). It is designed to provide decentralized, fair, and energy-efficient consensus for public blockchains. The algorithm works as a set of smart contracts written in Solidity. POSDAO is implemented with a general-purpose BFT consensus protocol such as Authority Round (AuRa), which uses a proposer node and probabilistic finality, or Honey Badger BFT (HBBFT), which is leaderless and offers instant finality. Validators are incentivized to act in the best interests of the network through a configurable reward structure. The algorithm provides a Sybil-control mechanism for managing the set of validators, distributing rewards, and reporting and penalizing malicious validators. The authors provide a reference POSDAO implementation, xDai DPOS, which uses xDai as a stable transactional coin and a representative ERC20 token (DPOS) as a staking token. The reference implementation runs on an Ethereum 1.0 sidechain and uses the AuRa consensus protocol. Assets are bridged between the Ethereum mainnet and the xDai DPOS network using several instances of the POA TokenBridge.
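The configurable reward structure mentioned above typically allocates an epoch's reward in proportion to stake. The sketch below shows one common integer-arithmetic scheme (remainder units going to the largest stakers); the actual POSDAO Solidity contracts define their own schedule, so treat this only as an illustration of the pattern.

```python
def distribute_rewards(stakes: dict, epoch_reward: int) -> dict:
    """Split an integer epoch reward among validators proportionally
    to stake, giving leftover indivisible units to the largest stakers."""
    total = sum(stakes.values())
    rewards = {v: epoch_reward * s // total for v, s in stakes.items()}
    remainder = epoch_reward - sum(rewards.values())
    # Hand out the remaining units one each, largest stake first
    for v in sorted(stakes, key=stakes.get, reverse=True)[:remainder]:
        rewards[v] += 1
    return rewards

payout = distribute_rewards({"v1": 5000, "v2": 3000, "v3": 2000}, 1001)
```

Because the arithmetic is all integer, the payouts always sum exactly to the epoch reward, which matters on-chain where fractional token units cannot be minted.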
Deep Execution - Value and Policy Based Reinforcement Learning for Trading and Beating Market Benchmarks
Kevin Dabérius, Elvin Granat, P. Karlsson. Applied Computing eJournal, 2019-04-21. DOI: 10.2139/ssrn.3374766.

In this article we introduce the term "Deep Execution," which applies deep reinforcement learning (DRL) to optimal execution. We demonstrate two approaches to solving the optimal-execution problem: (1) the deep double Q-network (DDQN), a value-based approach, and (2) proximal policy optimization (PPO), a policy-based approach, for trading and beating market benchmarks such as the time-weighted average price (TWAP). We show that, first, DRL can reach the theoretically derived optimum by acting on the environment directly. Second, DRL agents can learn to capitalize on price trends (alpha signals) without directly observing the price. Finally, DRL can take advantage of the available information to create dynamic strategies as an informed trader and thus outperform static benchmark strategies such as TWAP.
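The TWAP benchmark referred to above is simply the unweighted average of prices over equally spaced times; an execution strategy "beats" it when its volume-weighted fill price is better. A minimal sketch (illustrative numbers, not the paper's experiments):

```python
def twap(prices):
    """Time-weighted average price over equally spaced observations."""
    return sum(prices) / len(prices)

def execution_avg_price(prices, schedule):
    """Average fill price for a child-order schedule (shares per time slice)."""
    assert len(prices) == len(schedule)
    total_shares = sum(schedule)
    return sum(p * q for p, q in zip(prices, schedule)) / total_shares

prices = [100.0, 100.5, 101.0, 101.5, 102.0]  # a rising market
uniform = [20, 20, 20, 20, 20]                # static TWAP-style schedule
front_loaded = [60, 20, 10, 5, 5]             # buys early in the uptrend

benchmark = twap(prices)
uniform_cost = execution_avg_price(prices, uniform)      # matches TWAP
smart_cost = execution_avg_price(prices, front_loaded)   # cheaper buy
```

A uniform schedule reproduces TWAP by construction; a DRL agent that detects the upward trend (an alpha signal) would learn to front-load buys, achieving a lower average cost, which is the effect the paper demonstrates.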
Home Automation System With Wsn and IoT
Syeda Gauhar Fatima, Syeda Kausar Fatima, Syed Mohd Ali, Naseer Ahmed Khan, Syed Adil. Applied Computing eJournal, 2019-03-31. DOI: 10.34218/IJARET.10.2.2019.009.

Smart devices are now widely used and relied upon, creating a need to connect them through the Internet in order to exploit their full functionality. Various smart-home systems exist, but they fail to provide capabilities such as remote device control, low communication overhead, and energy efficiency. This paper addresses these limitations of existing systems. A Wi-Fi-based wireless sensor network (WSN) system is designed for monitoring and controlling the environmental and safety parameters of a smart home. The lightweight MQTT protocol is used for interactions between the devices and the user, who can seamlessly control and monitor the devices remotely through a graphical user interface (GUI) in an Android application.
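MQTT, named above, routes messages by hierarchical topic strings, and subscribers use the wildcards `+` (one level) and `#` (all remaining levels). The matching rule is small enough to sketch directly; this is the standard MQTT filter semantics, not code from the paper.

```python
def topic_matches(pattern: str, topic: str) -> bool:
    """MQTT-style topic filter matching: '+' matches exactly one
    topic level, '#' (which must be last) matches all remaining levels."""
    p_parts, t_parts = pattern.split("/"), topic.split("/")
    for i, p in enumerate(p_parts):
        if p == "#":
            return True          # multi-level wildcard swallows the rest
        if i >= len(t_parts):
            return False         # topic ran out of levels
        if p != "+" and p != t_parts[i]:
            return False         # literal level mismatch
    return len(p_parts) == len(t_parts)

# An Android GUI subscribing to every device state in the living room:
assert topic_matches("home/livingroom/+/state", "home/livingroom/lamp/state")
```

A broker evaluates this check for every subscription on each published message, which is why MQTT stays lightweight enough for battery-powered WSN nodes.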
Aspect Based Sentiment Analysis for E-Commerce Using Classification Techniques
A. K, V. j. Applied Computing eJournal, 2019-03-29. DOI: 10.2139/ssrn.3362346.

Huge collections of consumer reviews for products are now available on the Web. These reviews contain rich opinion data on various products. They have become a valuable resource, helping consumers understand products before making purchasing decisions and helping manufacturers understand consumer opinion so they can improve their offerings effectively. However, such reviews are often disorganized, making information navigation and knowledge acquisition difficult. It is inefficient for users to form an overall opinion of a product by reading all of its reviews and manually analysing the opinions in each one. In this work, we derive product ratings from product reviews, aiming to automatically identify important product aspects from online consumer reviews. Important aspects are identified through two observations: the important aspects of a product are usually commented on by a large number of consumers, and consumers' opinions on the important aspects significantly influence their overall opinion of the product. Specifically, given consumer reviews of a product, we first identify the product aspects by labelling the reviews and then determine consumers' opinions on these aspects via a sentiment classifier. The proposed research applies SVM and Naive Bayes classification to recognize opinion words, simultaneously considering the review collection and the influence of consumers' opinions on each aspect on their overall opinion. Experimental results on reviews of popular mobile products show the effectiveness of our approach. We also apply the review-ranking results to opinion classification and improve its performance significantly.
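Of the two classifiers named above, Naive Bayes is simple enough to sketch from scratch. The following is a minimal multinomial Naive Bayes with add-one smoothing on toy aspect-level review snippets; the data and class are illustrative, not the authors' dataset or implementation.

```python
from collections import Counter
import math

class TinyNaiveBayes:
    """Multinomial Naive Bayes with add-one (Laplace) smoothing,
    a minimal stand-in for the paper's sentiment classifier."""

    def fit(self, docs, labels):
        self.labels = set(labels)
        self.prior = Counter(labels)
        self.word_counts = {c: Counter() for c in self.labels}
        for doc, c in zip(docs, labels):
            self.word_counts[c].update(doc.split())
        self.vocab = {w for c in self.labels for w in self.word_counts[c]}
        return self

    def predict(self, doc):
        def log_prob(c):
            total = sum(self.word_counts[c].values())
            score = math.log(self.prior[c] / sum(self.prior.values()))
            for w in doc.split():
                # +1 smoothing keeps unseen words from zeroing the score
                score += math.log((self.word_counts[c][w] + 1)
                                  / (total + len(self.vocab)))
            return score
        return max(self.labels, key=log_prob)

clf = TinyNaiveBayes().fit(
    ["battery life is great", "screen is great",
     "battery drains fast", "screen cracked and bad"],
    ["pos", "pos", "neg", "neg"],
)
label = clf.predict("great battery")
```

In practice one would train such a classifier per aspect (battery, screen, ...) after the aspect-identification step, then aggregate the per-aspect polarities into the overall rating.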
Utilizing Digital Tools for the Surveillance of the US Mortgage Market
Li Chang, Richard Koss. Applied Computing eJournal, 2019-03-29. DOI: 10.2139/ssrn.3362477.

The US mortgage market is of paramount economic and financial importance. While the causes of the Global Financial Crisis (GFC) remain a subject of vigorous debate, lax lending standards and opacity surrounding innovations in securitization are often cited as central issues. A decade after the crisis, we demonstrate that digital tools have been developed in the mortgage space with the potential to allow investors to form a clear view of investment risks and opportunities, and policymakers to design regulations with a complete view of the behavior of all participants: borrowers, underwriters, servicers, and investors. While big data tools have existed for some time, only recently have advanced techniques come to market that allow for more cost-effective analysis. The latest enhancement is the application of AI to this data to unify information across disparate data sets. We have seen demonstrations of the power of these techniques in analyzing business models for financial institutions and in informing policymakers about the implications of their decisions across broad categories of actors in this market. Looking ahead, the analysis performed here can be extended by matching loans across time as well as between different data sets, and through applications to different markets and countries.
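The cross-dataset loan matching mentioned as future work is, at its simplest, a join on shared origination attributes. The sketch below is purely illustrative (the field names are assumptions, not from the paper): it links a servicing record to its origination record via a hashed key.

```python
def match_loans(dataset_a, dataset_b, keys=("orig_date", "orig_amount", "zip")):
    """Link loan records across two datasets on shared origination
    fields, using an index on dataset_b for linear-time matching."""
    index = {}
    for rec in dataset_b:
        index.setdefault(tuple(rec[k] for k in keys), []).append(rec)
    return [(a, b) for a in dataset_a
            for b in index.get(tuple(a[k] for k in keys), [])]

# Hypothetical records: a servicing snapshot and an origination file
servicing = [{"orig_date": "2016-05", "orig_amount": 250000, "zip": "10001",
              "status": "current"}]
origination = [{"orig_date": "2016-05", "orig_amount": 250000, "zip": "10001",
                "fico": 720}]
pairs = match_loans(servicing, origination)
```

Real-world linkage is fuzzier (amounts round differently, dates shift), which is where the probabilistic and AI-based matching the authors allude to comes in.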
A Modified Circular Version of Playfair Cipher
Ashish Pandey, Neelendra Badal. Applied Computing eJournal, 2019-03-12. DOI: 10.2139/ssrn.3351022.

In the present era, data is the most valuable asset for everyone. We all share data with each other via various modes of communication. For secure communication, data can be encrypted during transmission. Among the many available encryption techniques, the Playfair cipher is best known for encrypting multiple letters at a time, making it highly tedious for an opponent to decipher the ciphertext. The goal of this paper is to secure alphanumeric data during transmission.

We outline the pros and cons of the classical Playfair cipher. Its encryption process uses a matrix in which letters are arranged in 5 rows and 5 columns, with the arrangement based on a keyword of non-repeating letters. This process handles only alphabetic characters; numeric data cannot be encrypted. Using an alphanumeric keyword, we therefore propose an enhancement to the existing encryption process, built on a new arrangement of tracks and sectors.
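For reference, the classical 5x5 Playfair encryption that the paper modifies works as follows: build the keyword square (merging I/J), break the plaintext into digraphs (splitting doubled letters with X and padding odd length), then apply the row/column/rectangle rules. This sketch implements the classical cipher only, not the paper's circular track-and-sector variant.

```python
def playfair_square(keyword: str):
    """Build the classical 5x5 Playfair square (I/J merged), row-major."""
    seen, square = set(), []
    for ch in keyword.upper() + "ABCDEFGHIKLMNOPQRSTUVWXYZ":
        ch = "I" if ch == "J" else ch
        if ch.isalpha() and ch not in seen:
            seen.add(ch)
            square.append(ch)
    return square  # 25 letters

def playfair_encrypt(plaintext: str, keyword: str) -> str:
    square = playfair_square(keyword)
    pos = {ch: (i // 5, i % 5) for i, ch in enumerate(square)}
    letters = [("I" if c == "J" else c)
               for c in plaintext.upper() if c.isalpha()]
    # Form digraphs: split doubled letters with 'X', pad odd length with 'X'
    pairs, i = [], 0
    while i < len(letters):
        a = letters[i]
        b = letters[i + 1] if i + 1 < len(letters) else "X"
        if a == b:
            b = "X"
            i += 1
        else:
            i += 2
        pairs.append((a, b))
    out = []
    for a, b in pairs:
        ra, ca = pos[a]
        rb, cb = pos[b]
        if ra == rb:        # same row: take the letter to the right
            out += [square[ra * 5 + (ca + 1) % 5], square[rb * 5 + (cb + 1) % 5]]
        elif ca == cb:      # same column: take the letter below
            out += [square[((ra + 1) % 5) * 5 + ca], square[((rb + 1) % 5) * 5 + cb]]
        else:               # rectangle: swap the columns
            out += [square[ra * 5 + cb], square[rb * 5 + ca]]
    return "".join(out)

cipher = playfair_encrypt("HIDETHEGOLD", "PLAYFAIR")
```

The 25-letter alphabet is exactly the limitation the paper targets: digits cannot appear in the square, which motivates the proposed alphanumeric track-and-sector arrangement.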
Smart and Low Cost Real Time Water Quality Monitoring System Using IoT
Jeba Anandh S, A. M, Aswinrajan J, K. G, K. P. Applied Computing eJournal, 2019-03-11. DOI: 10.2139/ssrn.3350297.

Water pollution is one of the biggest threats to green globalization today. To prevent water pollution, we first have to estimate water parameters such as pH, turbidity, temperature, and total dissolved solids (TDS), as variations in these values indicate the presence of pollutants. In this project we design and develop a low-cost IoT system for real-time monitoring of water quality. Internet of Things (IoT) and remote sensing (RS) techniques are used in many research areas for monitoring, collecting, and analysing data from remote locations. This project proposes a sensor-based water-quality monitoring system that measures the physical and chemical parameters of the water, such as temperature, pH, TDS, and water level, and provides a visual image of the interior of the water container using a Raspberry Pi camera. The measured sensor values are processed by a core controller, for which a Raspberry Pi is used. The sensor data can then be viewed on the Internet via cloud storage (ThingSpeak). The system also alerts a remote user when a water-quality parameter deviates from a predefined set of standard values. The uniqueness of the proposed system lies in combining high sampling frequency, high mobility, and low power consumption.
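The alerting step described above reduces to comparing each reading against predefined limits. A minimal sketch follows; the limit values are common drinking-water guideline figures used here for illustration, not the paper's configured thresholds.

```python
# Illustrative acceptable ranges per parameter (lower, upper)
LIMITS = {
    "ph": (6.5, 8.5),
    "turbidity_ntu": (0.0, 5.0),
    "temp_c": (0.0, 35.0),
    "tds_ppm": (0.0, 500.0),
}

def check_reading(reading: dict) -> list:
    """Return one alert string per parameter outside its limits;
    parameters missing from the reading are skipped."""
    alerts = []
    for param, (lo, hi) in LIMITS.items():
        value = reading.get(param)
        if value is not None and not (lo <= value <= hi):
            alerts.append(f"{param}={value} outside [{lo}, {hi}]")
    return alerts

alerts = check_reading({"ph": 9.1, "turbidity_ntu": 2.0, "tds_ppm": 350})
```

On the Raspberry Pi, a non-empty return list would trigger the remote notification and could also be pushed to the ThingSpeak channel alongside the raw values.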
Impact of Time as an Independent in Tree Based Forecasting
T. Leitch. Applied Computing eJournal, 2019-03-03. DOI: 10.2139/ssrn.3345995.

This paper analyzes the impact of including or excluding time when there is a regime change in the explanatory power of two independent Bernoulli factors driving an observable process Y, providing a counterexample that illustrates the importance of time as an independent variable in a tree-based forecast.
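The regime-change setup can be simulated in a few lines. In this toy version (my construction, not the paper's exact experiment), factor A drives Y before the switch and factor B after it; a rule that can split on time recovers Y perfectly, while the best single-factor rule cannot, which is the intuition behind including time as an independent variable in the tree.

```python
import random

random.seed(0)
T, SWITCH = 1000, 500
data = []
for t in range(T):
    a, b = random.randint(0, 1), random.randint(0, 1)
    y = a if t < SWITCH else b   # A explains Y early, B late
    data.append((t, a, b, y))

def accuracy(predict):
    """Fraction of observations where the rule reproduces Y."""
    return sum(predict(t, a, b) == y for t, a, b, y in data) / T

# Without time, the best a tree can do is follow one factor throughout;
# with time, a single split at the regime change selects the active factor.
acc_without_time = accuracy(lambda t, a, b: a)
acc_with_time = accuracy(lambda t, a, b: a if t < SWITCH else b)
```

The time-aware rule is exact, while the time-blind rule is right in the first regime and only coin-flip right in the second, so its accuracy sits near 0.75.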