Enabling hyperscale web services
Akshitha Sriraman
Pub Date: 2023-02-01  DOI: 10.1016/j.tbench.2023.100092
BenchCouncil Transactions on Benchmarks, Standards and Evaluations, vol. 3, no. 1, Article 100092
Modern web services such as social media, online messaging, and web search support billions of users, requiring data centers that scale to hundreds of thousands of servers, i.e., hyperscale. The key challenge in enabling hyperscale web services arises from (1) unprecedented growth in data, users, and service functionality and (2) a decline in hardware performance scaling. We highlight a dissertation's contributions in bridging the software and hardware worlds to realize more efficient hyperscale services despite these challenges.
Optimizing the sparse approximate inverse preconditioning algorithm on GPU
Xinyue Chu, Yizhou Wang, Qi Chen, Jiaquan Gao
Pub Date: 2022-10-01  DOI: 10.1016/j.tbench.2023.100087
BenchCouncil Transactions on Benchmarks, Standards and Evaluations, vol. 2, no. 4, Article 100087
In this study, we present an optimized sparse approximate inverse (SPAI) preconditioning algorithm on GPU, called GSPAI-Opt. GSPAI-Opt fuses the advantages of two popular SPAI preconditioning algorithms and has the following novelties: (1) an optimization strategy is proposed to choose between constant and non-constant thread groups for any sparse pattern of the preconditioner; (2) a parallel framework for optimizing the SPAI preconditioner on GPU is proposed; and (3) for each component of the preconditioner, a decision tree is established to choose the optimal kernel for computing it. Experimental results validate the effectiveness of GSPAI-Opt.
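The core SPAI idea behind preconditioners like GSPAI-Opt is to minimize ||AM − I||_F over a prescribed sparsity pattern for M; because each column of M gives an independent small least-squares problem, the construction parallelizes naturally across GPU thread groups. The dense NumPy sketch below illustrates only this column-wise formulation, not the paper's GPU kernels; the function name and the example pattern are ours.

```python
import numpy as np

def spai_columns(A, pattern):
    """Build a sparse approximate inverse M minimizing ||A M - I||_F
    column by column. pattern[j] lists the allowed nonzero rows of
    column j of M; each column is an independent least-squares solve,
    which is what makes SPAI attractive for GPU parallelism."""
    n = A.shape[0]
    M = np.zeros((n, n))
    for j in range(n):
        J = pattern[j]                       # allowed nonzeros in column j
        e = np.zeros(n)
        e[j] = 1.0
        # Solve min ||A[:, J] m - e_j||_2 for the small dense block.
        m, *_ = np.linalg.lstsq(A[:, J], e, rcond=None)
        M[J, j] = m
    return M

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
# Tridiagonal sparsity pattern prescribed for M.
pattern = [[0, 1], [0, 1, 2], [1, 2]]
M = spai_columns(A, pattern)
residual = np.linalg.norm(A @ M - np.eye(3))
```

Even with this restrictive pattern, the residual is far below ||I||_F, which is what makes M useful as a preconditioner for iterative solvers.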
Performance characterization and optimization of pruning patterns for sparse DNN inference
Yunjie Liu, Jingwei Sun, Jiaqiang Liu, Guangzhong Sun
Pub Date: 2022-10-01  DOI: 10.1016/j.tbench.2023.100090
BenchCouncil Transactions on Benchmarks, Standards and Evaluations, vol. 2, no. 4, Article 100090
Deep neural networks suffer from over-parameterization, which leads to high storage and computation costs. Pruning can effectively reduce these costs by eliminating redundant parameters. Among existing pruning methods, filter pruning achieves more efficient inference, while element-wise pruning maintains better accuracy. To trade off between these two endpoints, a variety of pruning patterns has been proposed. This study analyzes the performance characteristics of sparse DNNs pruned by different patterns, including element-wise, vector-wise, block-wise, and group-wise. Based on the analysis, we propose an efficient implementation of group-wise sparse DNN inference that makes better use of GPUs. Experimental results on VGG, ResNet, BERT, and ViT show that our optimized group-wise pruning pattern achieves much lower inference latency on GPU than other sparse patterns and the existing group-wise pattern implementation.
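The patterns compared in the abstract differ only in the granularity of the pruning mask applied to a weight matrix: element-wise pruning zeroes individual weights, while coarser patterns zero whole tiles that map cleanly onto GPU compute units. A minimal sketch of the two extremes, with magnitude-based selection (a common criterion, not necessarily the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))   # toy weight matrix

def elementwise_mask(W, sparsity):
    """Keep the largest-magnitude individual weights (fine-grained:
    best accuracy, irregular memory access on GPU)."""
    k = int(W.size * (1 - sparsity))
    thresh = np.sort(np.abs(W).ravel())[::-1][k - 1]
    return (np.abs(W) >= thresh).astype(float)

def blockwise_mask(W, sparsity, bs=4):
    """Prune whole bs x bs blocks by their L1 norm (coarse-grained:
    regular tiles that map well onto GPU kernels)."""
    h, w = W.shape[0] // bs, W.shape[1] // bs
    scores = np.abs(W).reshape(h, bs, w, bs).sum(axis=(1, 3))
    k = int(scores.size * (1 - sparsity))
    thresh = np.sort(scores.ravel())[::-1][k - 1]
    return np.kron(scores >= thresh, np.ones((bs, bs)))

mask_e = elementwise_mask(W, 0.75)
mask_b = blockwise_mask(W, 0.75)
```

Both masks keep the same fraction of weights (25% here); group-wise patterns sit between these two extremes, which is why they can trade accuracy against inference latency.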
TBench (BenchCouncil Transactions on Benchmarks, Standards and Evaluations) Calls for Papers
Pub Date: 2022-10-01  DOI: 10.1016/j.tbench.2023.100103
BenchCouncil Transactions on Benchmarks, Standards and Evaluations, vol. 2, no. 4, Article 100103
IoTBench: A data centrical and configurable IoT benchmark suite
Simin Chen, Chunjie Luo, Wanling Gao, Lei Wang
Pub Date: 2022-10-01  DOI: 10.1016/j.tbench.2023.100091
BenchCouncil Transactions on Benchmarks, Standards and Evaluations, vol. 2, no. 4, Article 100091
As the Internet of Things (IoT) industry expands, the demand for microprocessors and microcontrollers used in IoT systems has increased steadily. Benchmarks provide a valuable reference for processor evaluation, but different IoT application scenarios involve different data scales, dimensions, and types, while the current popular benchmarks only evaluate processor performance under fixed data formats and therefore cannot adapt to the fragmented scenarios these processors face. This paper proposes a new benchmark, IoTBench. The IoTBench workloads cover three types of algorithms commonly used in IoT applications: matrix processing, list operation, and convolution. Moreover, IoTBench divides the data space into different evaluation subspaces according to data scale, data type, and data dimension. We analyze the impact of different data types, dimensions, and scales on processor performance, and use IoTBench to compare ARM with RISC-V and MinorCPU with O3CPU. We also explore the performance of processors with different architecture configurations in different evaluation subspaces and identify the optimal architecture for each subspace. The specifications, source code, and results are publicly available at https://www.benchcouncil.org/iotbench/.
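The "evaluation subspace" idea above amounts to running the same workloads over the cross-product of data axes rather than one fixed format. A sketch of that enumeration, with illustrative axis values of our choosing (the concrete types, dimensions, and scales IoTBench uses may differ):

```python
import itertools
import time
import numpy as np

# Hypothetical axes of the evaluation space -- illustration only.
DATA_TYPES = [np.int32, np.float32, np.float64]
DIMENSIONS = [1, 2]            # vector vs. matrix workloads
SCALES = [64, 256]             # elements per dimension

def run_subspace(dtype, dim, scale):
    """One micro-measurement inside a (type, dimension, scale) subspace."""
    shape = (scale,) * dim
    a = np.ones(shape, dtype=dtype)
    b = np.ones(shape, dtype=dtype)
    t0 = time.perf_counter()
    _ = a @ b if dim == 2 else a * b       # matrix vs. element workload
    return time.perf_counter() - t0

# Every combination of axes is its own evaluation subspace.
results = {
    (dtype.__name__, dim, scale): run_subspace(dtype, dim, scale)
    for dtype, dim, scale in itertools.product(DATA_TYPES, DIMENSIONS, SCALES)
}
```

Reporting a score per subspace, rather than one aggregate number, is what lets a fixed-format-sensitive processor show up as strong in some subspaces and weak in others.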
A review of Blockchain Technology applications for financial services
M. Javaid, Abid Haleem, R. Singh, R. Suman, Shahbaz Khan
Pub Date: 2022-10-01  DOI: 10.1016/j.tbench.2022.100073
BenchCouncil Transactions on Benchmarks, Standards and Evaluations
Edge AIBench 2.0: A scalable autonomous vehicle benchmark for IoT–Edge–Cloud systems
Tianshu Hao, Wanling Gao, Chuanxin Lan, Fei Tang, Zihan Jiang, Jianfeng Zhan
Pub Date: 2022-10-01  DOI: 10.1016/j.tbench.2023.100086
BenchCouncil Transactions on Benchmarks, Standards and Evaluations, vol. 2, no. 4, Article 100086
Many emerging IoT–Edge–Cloud computing systems are not yet implemented, are too confidential for their code to be shared, or have execution environments that are tricky to replicate, which makes benchmarking them very challenging. This paper uses autonomous vehicles as a typical scenario to build the first benchmark for IoT–Edge–Cloud systems. We propose a set of distilling rules for replicating autonomous vehicle scenarios that extract critical tasks with intertwined interactions. The essential system-level and component-level characteristics are captured while system complexity is reduced significantly, so that users can quickly evaluate and pinpoint system and component bottlenecks. We also implement a scalable architecture through which users can assess systems under workloads of different sizes.
We conduct several experiments to measure performance. After testing two thousand autonomous vehicle task requests, we identify the bottleneck modules in autonomous vehicle scenarios and analyze their hotspot functions. The results show that lane keeping is the slowest execution module, with a 99th-percentile tail latency of 77.49 ms. We hope this scenario benchmark will be helpful for autonomous vehicle and broader IoT–Edge–Cloud research. The open-source code is available from the official website https://www.benchcouncil.org/scenariobench/edgeaibench.html.
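The headline metric above, 99th-percentile tail latency, is the latency below which 99% of requests complete; ranking modules by it is how a benchmark like this surfaces the bottleneck. A small sketch with synthetic, made-up latency distributions (the module names and numbers are ours, not the paper's measurements):

```python
import numpy as np

def tail_latency(samples_ms, percentile=99):
    """Latency (ms) below which `percentile`% of request samples fall."""
    return float(np.percentile(samples_ms, percentile))

rng = np.random.default_rng(1)
# Synthetic per-request latencies for two hypothetical modules (ms);
# lognormal is a common shape for service latency distributions.
latencies = {
    "lane_keeping": rng.lognormal(mean=3.5, sigma=0.5, size=2000),
    "sign_detection": rng.lognormal(mean=2.0, sigma=0.5, size=2000),
}

p99 = {module: tail_latency(s) for module, s in latencies.items()}
bottleneck = max(p99, key=p99.get)
```

Note that the p99 can be many times the median: a module that looks fine on average can still dominate end-to-end latency, which is why the paper reports tail rather than mean latency.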
Enabling Reduced Simpoint Size Through LiveCache and Detail Warmup
Jose Renau, Fangping Liu, Hongzhang Shan, Sang Wook Stephen Do
Pub Date: 2022-10-01  DOI: 10.1016/j.tbench.2022.100082
BenchCouncil Transactions on Benchmarks, Standards and Evaluations, vol. 2, no. 4, Article 100082
Simpoint technology (Sherwood et al., 2002) has been widely used by the modern micro-architecture research community to significantly speed up simulation. However, typical Simpoint sizes remain tens to hundreds of millions of instructions. At such sizes, cycle-accurate simulators still need tens of hours or even days to finish a simulation, depending on the architecture complexity and workload characteristics. In this paper, we develop a new simulation framework by integrating LiveCache and detail warmups with Dromajo (https://chipyard.readthedocs.io/en/latest/Tools/Dromajo.html; Kabylkas et al. (2005)), enabling us to use a much smaller Simpoint size (2 million instructions) without loss of accuracy. Our evaluation shows that average simulation time is accelerated by 9.56 times relative to the 50M size, and most workload simulations finish in tens of minutes instead of hours.
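For context, Simpoint's trick is to split execution into fixed-size intervals, summarize each as a basic-block vector (BBV), cluster the BBVs with k-means, and then cycle-accurately simulate only one representative interval per cluster, weighting its results by cluster size. A minimal sketch of that selection step on toy two-phase data (our own simplified k-means with deterministic farthest-point initialization, not the Simpoint tool itself):

```python
import numpy as np

def pick_simpoints(bbvs, k=2, iters=20):
    """Cluster per-interval basic-block vectors (BBVs) with k-means and
    return one representative interval per cluster (closest to the
    centroid) plus each cluster's weight -- the essence of
    Simpoint-style sampled simulation."""
    # Farthest-point initialization keeps the sketch deterministic.
    centers = [bbvs[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(bbvs - c, axis=1) for c in centers], axis=0)
        centers.append(bbvs[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):                       # Lloyd iterations
        d = np.linalg.norm(bbvs[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([bbvs[labels == i].mean(axis=0) for i in range(k)])
    reps, weights = [], []
    for i in range(k):
        idx = np.flatnonzero(labels == i)
        d_i = np.linalg.norm(bbvs[idx] - centers[i], axis=1)
        reps.append(int(idx[d_i.argmin()]))      # representative interval
        weights.append(len(idx) / len(bbvs))     # cluster weight
    return reps, weights

# Toy program with two phases: intervals 0-9 execute one set of basic
# blocks, intervals 10-19 another.
phase_a = np.tile([1.0, 0.0, 0.0], (10, 1))
phase_b = np.tile([0.0, 1.0, 1.0], (10, 1))
bbvs = np.vstack([phase_a, phase_b])
reps, weights = pick_simpoints(bbvs, k=2)
```

Simulating only `reps` and combining their metrics by `weights` is what cuts days of cycle-accurate simulation to hours; this paper's contribution is shrinking each representative interval itself via warmed-up microarchitectural state.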
Diagnosis of COVID-19 from X-rays using combined CNN-RNN architecture with transfer learning
Md. Milon Islam, Md. Zabirul Islam, Amanullah Asraf, Mabrook S. Al-Rakhami, Weiping Ding, Ali Hassan Sodhro
Pub Date: 2022-10-01  DOI: 10.1016/j.tbench.2023.100088
BenchCouncil Transactions on Benchmarks, Standards and Evaluations, vol. 2, no. 4, Article 100088
Combating the COVID-19 pandemic has emerged as one of the most pressing issues in global healthcare. Accurate and fast diagnosis of COVID-19 cases is required for the right medical treatment to control this pandemic. Chest radiography imaging techniques are more effective than the reverse-transcription polymerase chain reaction (RT-PCR) method in detecting coronavirus. Due to the limited availability of medical images, transfer learning is better suited to classifying patterns in medical images. This paper presents a combined convolutional neural network (CNN) and recurrent neural network (RNN) architecture to diagnose COVID-19 patients from chest X-rays. The deep transfer-learning backbones used in this experiment are VGG19, DenseNet121, InceptionV3, and Inception-ResNetV2; the CNN extracts complex features from samples, which the RNN then classifies. In our experiments, the VGG19-RNN architecture outperformed all other networks in terms of accuracy. Finally, the decision-making regions of images were visualized using gradient-weighted class activation mapping (Grad-CAM). The system achieved promising results compared to other existing systems and may be further validated when more samples become available. The experiment demonstrates a good alternative method for medical staff to diagnose COVID-19.
All the data used during the study are openly available from the Mendeley data repository at https://data.mendeley.com/datasets/mxc6vb7svm. For further research, we have made the source code publicly available at https://github.com/Asraf047/COVID19-CNN-RNN.
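In a CNN-RNN pipeline of this kind, the pretrained CNN is typically run as a frozen feature extractor and the resulting feature sequence is fed through the recurrent layer, whose final hidden state drives a sigmoid classification head. The sketch below shows only that recurrent half as a hand-rolled vanilla RNN over random stand-in feature vectors; the shapes, names, and single-step recurrence are our simplifications, not the architecture from the paper (consult its repository for the real model).

```python
import numpy as np

def rnn_classify(features, Wx, Wh, b, w_out, b_out):
    """Vanilla RNN forward pass over a sequence of CNN feature vectors,
    ending in a sigmoid binary output (e.g. COVID vs. normal)."""
    h = np.zeros(Wh.shape[0])
    for x in features:                        # one step per feature vector
        h = np.tanh(Wx @ x + Wh @ h + b)
    logit = w_out @ h + b_out
    return 1.0 / (1.0 + np.exp(-logit))       # sigmoid probability

rng = np.random.default_rng(0)
feat_dim, hidden = 16, 8
# Stand-ins for features a pretrained backbone (e.g. VGG19) would emit.
features = rng.standard_normal((4, feat_dim))
Wx = rng.standard_normal((hidden, feat_dim)) * 0.1
Wh = rng.standard_normal((hidden, hidden)) * 0.1
b = np.zeros(hidden)
w_out = rng.standard_normal(hidden)
p = rnn_classify(features, Wx, Wh, b, w_out, b_out=0.0)
```

Because the backbone stays frozen, only the small recurrent head needs training, which is what makes the transfer-learning setup workable on the limited medical image datasets the abstract mentions.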
An extensive study on Internet of Behavior (IoB) enabled Healthcare-Systems: Features, facilitators, and challenges
Mohd Javaid, Abid Haleem, Ravi Pratap Singh, Shahbaz Khan, Rajiv Suman
Pub Date: 2022-10-01  DOI: 10.1016/j.tbench.2023.100085
BenchCouncil Transactions on Benchmarks, Standards and Evaluations, vol. 2, no. 4, Article 100085
The Internet of Behaviour (IoB) is an effort to dissect behavioural patterns as revealed by collected data. IoB is an extension of the Internet of Things (IoT), and both are therefore anticipated to experience exponential growth in the coming years. Healthcare firms have many opportunities to employ IoB to provide individualised services and anticipate patients' behaviour. As behaviour and its analysis are closely related to psychology, many techniques exist to collect the relevant data. IoB improves both the doctor's and the patient's experience. Because IoT and IoB are interconnected, IoB technology collects and analyses data based on user activity, offering a practical technique for developing real-time remote health monitoring systems. This technology aids in the optimisation of auto insurance premiums in the healthcare sector, and it aims to alter patient behaviour in order to improve the treatment process. IoB has applications in various areas, including retail and entertainment, and has the potential to change the marketing sector significantly. This technology is helpful for the appropriate analysis and comprehension of behavioural data used to create valuable treatment services. The primary purpose of this paper is to study IoB and the need for it in healthcare. The working process, structure, and features of IoB for the healthcare domain are studied. The paper further identifies and analyses the significant applications of IoB for healthcare. In the future, IoB technologies will give us a higher quality of life and well-being. IoB is an ideal fusion of technology, data analytics, and behavioural science, helping healthcare professionals collect data and analyse patients' behaviour for an efficient treatment process. Within a few years, the IoB will be the digital ecosystem's intelligence.