Latest Publications in J. Inf. Process. Syst.

New Two-Level L1 Data Cache Bypassing Technique for High Performance GPUs
Pub Date : 2021-02-09 DOI: 10.3745/JIPS.01.0062
Gwang Bok Kim, C. Kim
On-chip caches of graphics processing units (GPUs) have contributed to improved GPU performance by reducing long memory access latency. However, cache efficiency remains low despite the fact that recent GPUs have considerably mitigated the bottleneck problem of the L1 data cache. Although the cache miss rate is a reasonable metric for cache efficiency, it is not necessarily proportional to GPU performance. In this study, we introduce a second key determinant for predicting the performance gains from the L1 data cache, based on the premise that the miss rate alone is not an accurate predictor. The proposed technique estimates the benefit of the cache by measuring the balance between cache efficiency and throughput. The throughput of the cache is predicted from the warp occupancy information in the warp pool. The warp occupancy is then used in a second bypass phase when workloads show an ambiguous miss rate. In the proposed architecture, the L1 data cache is turned off for a long period when the warp occupancy is not high. Our two-level bypassing technique can be applied to recent GPU models and improves performance by 6% on average compared to an architecture without bypassing. Moreover, it outperforms conventional bottleneck-based bypassing techniques.
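The abstract describes a two-level decision: a clearly high or low miss rate decides bypassing directly, and only an ambiguous miss rate falls through to the warp-occupancy check. Below is a minimal Python sketch of that control logic, assuming illustrative thresholds and a normalized warp-occupancy value; these parameters are not taken from the paper.

```python
def should_bypass_l1(miss_rate, warp_occupancy,
                     high_miss=0.8, low_miss=0.3, occupancy_floor=0.5):
    """Two-level bypass decision sketch (thresholds are illustrative assumptions).

    Level 1: a clearly high miss rate bypasses the L1 data cache outright,
             while a clearly low miss rate keeps caching enabled.
    Level 2: in the ambiguous middle band, the warp occupancy of the warp pool
             acts as the second determinant; low occupancy (low predicted cache
             throughput) turns the L1 off for the following period.
    """
    if miss_rate >= high_miss:        # level 1: cache clearly ineffective
        return True
    if miss_rate <= low_miss:         # level 1: cache clearly beneficial
        return False
    # level 2: ambiguous miss rate -> consult warp occupancy
    return warp_occupancy < occupancy_floor


# Example: ambiguous miss rate with a nearly empty warp pool -> bypass the L1.
print(should_bypass_l1(miss_rate=0.55, warp_occupancy=0.2))  # True
```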
Citations: 0
The Future of Quantum Information: Challenges and Vision
Pub Date : 2021-02-09 DOI: 10.3745/JIPS.01.0063
Dohyun Kim, Jungho Kang, Tae Woo Kim, Yi Pan, Jong Hyuk Park
Quantum information has moved beyond the purely theoretical research stage and entered the realization phase of its application to the information and communications technology (ICT) sector. Quantum information processing promises to be safer and faster than conventional digital computing, and a large amount of research is therefore underway. The amount of big data that must be handled is expected to grow exponentially, and quantum computing also represents a new business model that can change the landscape of existing computing. Just as the IT sector has faced many challenges in the past, we need to be prepared for the changes brought about by quantum technology. We review studies on quantum communication, quantum sensing, and quantum computing based on quantum information, and survey the technology levels of individual countries and companies. Based on this review, we present the vision and challenges for quantum information in the future. Our work is significant because, by discussing the fundamentals of quantum information and summarizing the current state of the field, it reduces the start-up time for first-time researchers.
Citations: 5
Re-SSS: Rebalancing Imbalanced Data Using Safe Sample Screening
Pub Date : 2021-02-09 DOI: 10.3745/JIPS.01.0065
Hongbo Shi, Xin Chen, Mingzhe Guo
Different samples can have different effects on learning support vector machine (SVM) classifiers. To rebalance an imbalanced dataset, it is reasonable to reduce the number of non-informative samples and add informative samples for learning classifiers. Safe sample screening can identify a portion of the non-informative samples and retain the informative ones. This study develops a resampling algorithm for Rebalancing imbalanced data using Safe Sample Screening (Re-SSS), which is composed of selecting Informative Samples (Re-SSS-IS) and rebalancing via a Weighted SMOTE (Re-SSS-WSMOTE). Re-SSS-IS selects informative samples from the majority class and determines a suitable regularization parameter for the SVM, while Re-SSS-WSMOTE generates informative minority samples. Both Re-SSS-IS and Re-SSS-WSMOTE are based on safe sample screening. The experimental results show that Re-SSS can effectively improve the classification performance on imbalanced classification problems.
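As a rough illustration of the Re-SSS-WSMOTE step, the sketch below generates synthetic minority samples by interpolating between a minority point and one of its nearest minority neighbors, drawing the seed points with probabilities proportional to per-sample weights. The weights stand in for the informativeness scores produced by the safe-sample-screening step, which is omitted here; the uniform weights in the example and the parameter choices are assumptions, not the paper's procedure.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def weighted_smote(X_min, weights, n_new, k=5, rng=None):
    """Generate n_new synthetic minority samples (sketch of a weighted SMOTE).

    X_min   : (n, d) minority-class samples
    weights : (n,) informativeness scores; assumed to come from the
              safe-sample-screening step described in the paper
    """
    rng = np.random.default_rng(rng)
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()                               # sampling probabilities
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nn.kneighbors(X_min)                 # idx[:, 0] is the point itself

    synthetic = []
    for _ in range(n_new):
        i = rng.choice(len(X_min), p=p)           # informative samples drawn more often
        j = rng.choice(idx[i, 1:])                # one of its k minority neighbors
        lam = rng.random()                        # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.vstack(synthetic)

# Example: oversample a toy minority class with uniform (placeholder) weights.
X_min = np.random.default_rng(0).normal(size=(20, 3))
X_new = weighted_smote(X_min, weights=np.ones(20), n_new=10, rng=0)
print(X_new.shape)  # (10, 3)
```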
Citations: 3
On-Demand Remote Software Code Execution Unit Using On-Chip Flash Memory Cloudification for IoT Environment Acceleration
Pub Date : 2021-02-09 DOI: 10.3745/JIPS.01.0064
Dongkyu Lee, Moon Gi Seok, Daejin Park
In an Internet of Things (IoT)-configured system, each device executes on-chip software. Recent IoT devices must execute complex services, such as analyzing large amounts of data, quickly while maintaining low-power computation. As service complexity increases, services require higher-performance computing and more embedded memory space. However, the low performance of IoT edge devices and their small memory size can hinder the complex and diverse operations of IoT services. In this paper, we propose a remote on-demand software code execution unit that uses cloudification of on-chip code memory to accelerate program execution on an IoT edge device with a low-performance processor. We also propose a simulation approach that distributes code between the server side and the edge side according to the program's computational and communication needs. Our on-demand remote code execution unit simulation platform, which includes an instruction set simulator based on the 16-bit ARM Thumb instruction set architecture, successfully emulates the architectural behavior of on-chip flash memory, enabling embedded devices to accelerate and execute software using remote execution code in the IoT environment.
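The distribution of code between the edge and the server is ultimately a cost comparison between on-chip execution time and remote execution plus communication time. The sketch below captures that trade-off in Python; the clock rates, link throughput, and threshold logic are illustrative assumptions rather than values from the paper's simulation platform.

```python
def choose_execution_site(cycles, payload_bytes,
                          edge_cps=48e6,        # edge clock, e.g. a small MCU core (assumed)
                          server_cps=3e9,       # server clock (assumed)
                          link_bps=250e3):      # wireless link throughput (assumed)
    """Return 'edge' or 'remote' by comparing estimated execution times.

    Remote execution pays a communication cost for shipping code/data and
    results over the link, but benefits from the faster server core.
    """
    t_edge = cycles / edge_cps
    t_remote = cycles / server_cps + (payload_bytes * 8) / link_bps
    return "remote" if t_remote < t_edge else "edge"


# A compute-heavy routine with a small payload is worth offloading,
# while a trivial routine is cheaper to run from on-chip flash.
print(choose_execution_site(cycles=5_000_000, payload_bytes=512))   # remote
print(choose_execution_site(cycles=50_000,    payload_bytes=512))   # edge
```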
Citations: 3
Assisted Magnetic Resonance Imaging Diagnosis for Alzheimer's Disease Based on Kernel Principal Component Analysis and Supervised Classification Schemes
Pub Date : 2021-02-09 DOI: 10.3745/JIPS.04.0204
Yu Wang, Wenbin Zhou, Chongchong Yu, Weijun Su
Alzheimer's disease (AD) is an insidious, degenerative neurological disease. Applying magnetic resonance imaging (MRI) and computer technology to AD diagnosis is a relatively new topic that is currently being explored. In this paper, preprocessing and correlation analysis are first performed on the MRI data. Kernel principal component analysis (KPCA) is then used to extract features from brain gray-matter images. Finally, supervised classification schemes such as the AdaBoost algorithm and the support vector machine are used to classify these features. Experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, which contains structural MRI (sMRI) scans of 116 AD patients, 116 patients with mild cognitive impairment, and 117 normal controls, show that the proposed method can effectively assist the diagnosis and analysis of AD. Compared with the principal component analysis (PCA) method, all classification results based on KPCA improve by 2%–6%, with the best result reaching 84%. This indicates that KPCA extracts richer and more complete features than PCA.
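A pipeline of this shape (KPCA feature extraction followed by a supervised classifier) can be expressed compactly with scikit-learn. The sketch below uses synthetic stand-in data in place of the ADNI gray-matter features, and the kernel, component count, and classifier settings are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for vectorized gray-matter features (real input: ADNI sMRI).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 200))          # 120 subjects, 200 voxel-level features (toy)
y = rng.integers(0, 2, size=120)         # AD vs. control labels (toy)

# KPCA with an RBF kernel extracts nonlinear components; an SVM or AdaBoost classifies them.
svm_pipe = make_pipeline(StandardScaler(),
                         KernelPCA(n_components=30, kernel="rbf"),
                         SVC(kernel="rbf", C=1.0))
ada_pipe = make_pipeline(StandardScaler(),
                         KernelPCA(n_components=30, kernel="rbf"),
                         AdaBoostClassifier(n_estimators=100))

for name, pipe in [("KPCA+SVM", svm_pipe), ("KPCA+AdaBoost", ada_pipe)]:
    scores = cross_val_score(pipe, X, y, cv=5)
    print(name, scores.mean())
```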
Citations: 3
An Energy Efficient Multi-hop Cluster-Head Election Strategy for Wireless Sensor Networks
Pub Date : 2021-02-09 DOI: 10.3745/JIPS.04.0202
Liquan Zhao, Shuai Guo
In the double-phase cluster-head election method (DCE), the final cluster heads (CHs) are sometimes located at the edge of a cluster, far from the base station (BS). Because sensor data is transmitted directly from the CHs to the BS, such nodes consume considerable energy for data transmission and die earlier. To address this problem, an energy-efficient multi-hop cluster-head election strategy (EEMCE) is proposed in this paper. To avoid selecting nodes far from the BS as CHs, the strategy first introduces the distance from each sensor node to the BS into the tentative CH election. Subsequently, within the same cluster, the energy of the tentative CH is compared with that of the other nodes, and the node with more energy than the tentative CH that is nearest to it is taken as the final CH. Lastly, if the CH is located at the periphery of the network, multi-hop transmission is employed to reduce the energy consumed by CHs. The simulation results suggest that the proposed method exhibits higher energy efficiency, a longer stability period, and better scalability than other protocols.
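The sketch below illustrates the two election steps in Python: a distance-weighted tentative CH election followed by a final CH hand-off to the nearest higher-energy member of each cluster. The LEACH-style probability expression and all numeric parameters are illustrative assumptions, not the formulas used in EEMCE.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
pos = rng.uniform(0, 100, size=(n, 2))         # sensor node coordinates
energy = rng.uniform(0.5, 1.0, size=n)         # residual energy per node
bs = np.array([50.0, 175.0])                   # base station outside the field

d_bs = np.linalg.norm(pos - bs, axis=1)

# Step 1: tentative CH election. A LEACH-style probability, scaled down for
# nodes far from the BS so they are less likely to become CHs (the exact
# weighting is an illustrative assumption, not the paper's formula).
p = 0.1 * d_bs.min() / d_bs
tentative = np.flatnonzero(rng.random(n) < p)
if tentative.size == 0:                        # degenerate draw: fall back to best candidate
    tentative = np.array([int(np.argmax(p))])

# Assign every node to its nearest tentative CH.
dist_to_ch = np.linalg.norm(pos[:, None, :] - pos[tentative][None, :, :], axis=2)
membership = np.argmin(dist_to_ch, axis=1)     # index into `tentative` for each node

# Step 2: inside each cluster, the final CH is the member with more residual
# energy than the tentative CH that lies nearest to it.
final = []
for k, ch in enumerate(tentative):
    members = np.flatnonzero(membership == k)
    richer = members[energy[members] > energy[ch]]
    if richer.size == 0:
        final.append(int(ch))                  # nobody has more energy: keep the tentative CH
    else:
        d = np.linalg.norm(pos[richer] - pos[ch], axis=1)
        final.append(int(richer[np.argmin(d)]))

print("tentative:", tentative.tolist(), "final:", final)
```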
Citations: 0
Development of a Targeted Recommendation Model for Earthquake Risk Prevention in the Whole Disaster Chain
Pub Date : 2021-02-09 DOI: 10.3745/JIPS.04.0201
Xiaohui Su, Ke Ming, Xiaodong Zhang, Junming Liu, Da Lei
Strong earthquakes have caused substantial losses in recent years, and earthquake risk prevention has attracted significant attention. Earthquake risk prevention products can help improve people's self- and mutual-rescue abilities and can create convenient conditions for earthquake relief and reconstruction work. At present, it is difficult for earthquake risk prevention information systems to meet the information requirements of multiple scenarios, as they are highly specialized. To mitigate this shortcoming, this study investigates and analyzes four user roles (government users, public users, social-force users, and insurance-market users) and summarizes their requirements for earthquake risk prevention products across the whole disaster chain, which comprises three scenarios (pre-quake preparedness, in-quake warning, and post-quake relief). A targeted recommendation rule base is then constructed based on the case analysis method. Considering the user's location, the earthquake magnitude, and the time that has passed since the earthquake occurred, a targeted recommendation model is built. Finally, an Android app is implemented to realize the developed model. The app can recommend earthquake risk prevention products in multiple forms to users according to their requirements under the three scenarios. Taking the 2019 Lushan earthquake as an example, the app demonstrates that the model can deliver real-time information to everyone and thereby reduce earthquake damage.
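A targeted recommendation of this kind can be sketched as a rule lookup keyed on user role and disaster-chain scenario, refined by magnitude and distance to the epicenter. The entries and thresholds below are illustrative placeholders, not the paper's actual rule base.

```python
# Illustrative rule base: (role, scenario) -> ranked product list (placeholder entries).
RULES = {
    ("public", "pre-quake"):      ["hazard-zone map", "emergency-kit checklist"],
    ("public", "in-quake"):       ["early-warning alert", "nearest-shelter route"],
    ("public", "post-quake"):     ["self/mutual-rescue guide", "family check-in service"],
    ("government", "post-quake"): ["damage-assessment report", "relief-resource planner"],
}

def recommend(role, scenario, magnitude, distance_km):
    """Return targeted products, filtered by magnitude and the user's distance
    to the epicenter (thresholds are illustrative assumptions)."""
    products = list(RULES.get((role, scenario), []))
    if scenario != "pre-quake" and (magnitude < 4.5 or distance_km > 200):
        # Weak or distant event: keep only the top informational product.
        products = products[:1]
    return products

# Example: a member of the public 30 km from a magnitude-6.0 epicenter, post-quake.
print(recommend("public", "post-quake", magnitude=6.0, distance_km=30))
```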
Citations: 1
Stagewise Weak Orthogonal Matching Pursuit Algorithm Based on Adaptive Weak Threshold and Arithmetic Mean
Pub Date : 2020-12-01 DOI: 10.3745/JIPS.03.0152
Liquan Zhao, Ke Ma
{"title":"Stagewise Weak Orthogonal Matching Pursuit Algorithm Based on Adaptive Weak Threshold and Arithmetic Mean","authors":"Liquan Zhao, Ke Ma","doi":"10.3745/JIPS.03.0152","DOIUrl":"https://doi.org/10.3745/JIPS.03.0152","url":null,"abstract":"","PeriodicalId":415161,"journal":{"name":"J. Inf. Process. Syst.","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115675407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Directional Interpolation Based on Improved Adaptive Residual Interpolation for Image Demosaicking
Pub Date : 2020-12-01 DOI: 10.3745/JIPS.02.0148
Chenbo Liu
{"title":"Directional Interpolation Based on Improved Adaptive Residual Interpolation for Image Demosaicking","authors":"Chenbo Liu","doi":"10.3745/JIPS.02.0148","DOIUrl":"https://doi.org/10.3745/JIPS.02.0148","url":null,"abstract":"","PeriodicalId":415161,"journal":{"name":"J. Inf. Process. Syst.","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116259238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Big IoT Healthcare Data Analytics Framework Based on Fog and Cloud Computing
Pub Date : 2020-12-01 DOI: 10.3745/JIPS.04.0193
Hamoud H. Alshammari, Sameh Abd El-Ghany, and Abdulaziz Shehab
Throughout the world, aging populations and doctor shortages have helped drive the increasing demand for smart healthcare systems. Recently, these systems have benefited from the evolution of the Internet of Things (IoT), big data, and machine learning. However, these advances result in the generation of large amounts of data, making healthcare data analysis a major issue. These data have a number of complex properties, such as high dimensionality, irregularity, and sparsity, which make efficient processing difficult. These challenges are met by big data analytics. In this paper, we propose an innovative analytic framework for big healthcare data collected either from IoT wearable devices or from archived patient medical images. The proposed method efficiently addresses the data heterogeneity problem using middleware between heterogeneous data sources and MapReduce Hadoop clusters. Furthermore, the proposed framework enables the use of both fog computing and cloud platforms to handle online and offline data processing, data storage, and data classification. Additionally, it ensures robust and secure handling of patient medical data.
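The middleware's role is to map heterogeneous inputs (wearable readings, archived imaging metadata) onto one common schema before the MapReduce or cloud stage. The sketch below shows that idea in Python; the field names and the two source formats are assumptions made for illustration.

```python
def normalize_record(record, source):
    """Map a heterogeneous input record onto a common schema.

    `source` identifies the producer; the two formats below are illustrative
    assumptions standing in for real wearable and imaging-archive feeds.
    """
    if source == "wearable":
        return {"patient_id": record["dev_owner"],
                "timestamp": record["ts"],
                "kind": "vital",
                "value": {"heart_rate": record["hr"], "spo2": record["spo2"]}}
    if source == "imaging":
        return {"patient_id": record["PatientID"],
                "timestamp": record["StudyDate"],
                "kind": "image",
                "value": {"modality": record["Modality"], "uri": record["path"]}}
    raise ValueError(f"unknown source: {source}")

# Records from both feeds end up in one uniform stream for the processing stage.
stream = [
    normalize_record({"dev_owner": "p001", "ts": "2020-11-02T08:31:00",
                      "hr": 72, "spo2": 98}, "wearable"),
    normalize_record({"PatientID": "p001", "StudyDate": "2020-11-01",
                      "Modality": "MR", "path": "/archive/p001/mr_0001.dcm"}, "imaging"),
]
print(stream[0]["kind"], stream[1]["kind"])   # vital image
```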
Citations: 17