{"title":"Difference between Artificial Intelligence of the Internet of Things and the Traditional Internet","authors":"Yong Zhu, Lang Liu","doi":"10.23919/WAC55640.2022.9934002","DOIUrl":null,"url":null,"abstract":"From the birth of the Internet to the present, research on the Internet has been going on. Especially development with the integration of new technologies such as Artificial Intelligence (AI), big data, and the Internet of Things (IoT), with the Internet, people are paying more and more attention to the Internet, and the research on the differences between the IoT AI and the traditional Internet is also becoming more and more intense. The purpose of this article is to study the difference between the AI of the IoT and the traditional Internet. This article first summarizes the differences between the IoT AI and the traditional Internet, and then designs the IoT AI architecture system based on the BP neural network, and gives a detailed description of each layer in the architecture. This article introduces the hardware and software environment for deploying the AI system architecture of the IoT, and verifies the usability of the proposed architecture through performance tests. Through the average response time test of the cloud computing layer deployed with single and dual server load balancing, the data shows that when the number of concurrent clients is 1, the single server response time is 18.4ms, and the dual server response time is 17.2ms. When the number is 900, the single server response time is 573.6ms, and the dual server response time is 237.7ms. It can be seen that as the number of concurrent threads increases, the response time of the server is also longer. Moreover, under the condition of equal concurrent threads, the response time of two servers is shorter than that of the cloud computing layer deployed by a single server.","PeriodicalId":339737,"journal":{"name":"2022 World Automation Congress (WAC)","volume":"17 3 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 World Automation Congress (WAC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/WAC55640.2022.9934002","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Research on the Internet has been ongoing since its birth. With the integration of new technologies such as Artificial Intelligence (AI), big data, and the Internet of Things (IoT) into the Internet, people are paying more and more attention to it, and research on the differences between IoT AI and the traditional Internet is intensifying. The purpose of this article is to study the difference between the AI of the IoT and the traditional Internet. The article first summarizes the differences between IoT AI and the traditional Internet, then designs an IoT AI architecture based on the BP neural network and describes each layer of the architecture in detail. It also introduces the hardware and software environment used to deploy the IoT AI system architecture and verifies the usability of the proposed architecture through performance tests. In an average response time test of the cloud computing layer deployed with single-server and dual-server load balancing, the data show that with 1 concurrent client the single-server response time is 18.4 ms and the dual-server response time is 17.2 ms; with 900 concurrent clients, the single-server response time is 573.6 ms and the dual-server response time is 237.7 ms. As the number of concurrent clients increases, the server response time also grows, and for the same number of concurrent clients the response time of the cloud computing layer deployed on two load-balanced servers is shorter than that of a single server.
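To make the BP (back-propagation) component of the described architecture concrete, the following is a minimal sketch of a BP neural network trained with gradient descent. The layer sizes (4-8-1), learning rate, loss, and the synthetic sensor-style data are illustrative assumptions only; the paper's actual network configuration and training data are not published in this abstract.

```python
import numpy as np

# Minimal BP (back-propagation) network sketch: 4 inputs -> 8 hidden -> 1 output.
# All sizes and hyperparameters here are assumptions for illustration.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_deriv(a):
    # Derivative of the sigmoid expressed via its output a = sigmoid(x).
    return a * (1.0 - a)

rng = np.random.default_rng(0)

# Hypothetical IoT sensor readings: 4 features per sample, binary label.
X = rng.random((200, 4))
y = (X.sum(axis=1) > 2.0).astype(float).reshape(-1, 1)

# Randomly initialized weights and zero biases for the two layers.
W1 = rng.normal(scale=0.5, size=(4, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros((1, 1))

lr = 0.5
for epoch in range(2000):
    # Forward pass through hidden and output layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass for a mean-squared-error loss.
    err = out - y
    d_out = err * sigmoid_deriv(out)
    d_h = (d_out @ W2.T) * sigmoid_deriv(h)

    # Gradient-descent weight updates (averaged over the batch).
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0, keepdims=True)

accuracy = ((out > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

In the kind of layered IoT architecture the abstract outlines, a model like this would typically be trained and served in the cloud computing layer, which is also the layer whose single-server versus dual-server response times are reported above.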