
Proceedings of the 1st International Conference on Information Science and Systems: Latest Publications

Verb Based Conceptual Common Sense Extraction
Ji Youlang, Yu Yang, Z. Hongying, Zhu Jun, Gu Jingjing, Hua Lingya
The knowledge in knowledge bases such as Freebase and Knowledge Vault consists of facts that record relationships between two entities. This form of knowledge leads to two problems. First, it limits the scale of existing knowledge bases: when new facts are extracted, no patterns with a good ability to summarize are available. Second, when applied to real tasks, such knowledge often suffers from data sparsity. To address these two problems, this paper defines the problem of extracting common sense at the concept level. We evaluate our solution on the Google N-Grams data set, and the results show a substantial improvement.
{"title":"Verb Based Conceptual Common Sense Extraction","authors":"Ji Youlang, Yu Yang, Z. Hongying, Zhu Jun, Gu Jingjing, Hua Lingya","doi":"10.1145/3209914.3209941","DOIUrl":"https://doi.org/10.1145/3209914.3209941","url":null,"abstract":"The knowledge in the knowledge bases such as Freebase, Knowledge Vault and so on are all facts which record the relationships between two entities. It may lead to following two problems. First, this form of knowledge limits the scale of the existing knowledge bases. When extracting new facts, no good patterns with a good ability of summarization can be used. Second, when applied in some real tasks, the knowledge may always suffer the problem of data sparsity. To solve these two problems, in this paper, we define the problem of extracting common senses in a concept level. We evaluate our solutions on Google N-Grams data set, and the results shows a great improvement.","PeriodicalId":174382,"journal":{"name":"Proceedings of the 1st International Conference on Information Science and Systems","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116805485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
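The abstract above gives no implementation details, so the following Python sketch is only a rough illustration of what concept-level extraction from n-gram counts can look like: (subject, verb, object) n-grams are lifted to concepts through an assumed term-to-concept mapping and their counts are aggregated. The mapping, function names, and sample data are hypothetical, not the authors' method.

```python
from collections import Counter

# Hypothetical term -> concept mapping; in practice this would come from a
# taxonomy (e.g. hypernyms) rather than a hand-written dictionary.
CONCEPT_OF = {
    "dog": "animal", "cat": "animal", "sparrow": "bird",
    "apple": "fruit", "bread": "food", "meat": "food",
}

def extract_concept_triples(ngrams):
    """Aggregate (subject, verb, object) n-gram counts at the concept level.

    `ngrams` is an iterable of ((subject, verb, object), count) pairs,
    e.g. parsed from an n-gram corpus.
    """
    concept_counts = Counter()
    for (subj, verb, obj), count in ngrams:
        c_subj = CONCEPT_OF.get(subj)
        c_obj = CONCEPT_OF.get(obj)
        if c_subj and c_obj:
            concept_counts[(c_subj, verb, c_obj)] += count
    return concept_counts

if __name__ == "__main__":
    sample = [
        (("dog", "eat", "meat"), 120),
        (("cat", "eat", "bread"), 45),
        (("sparrow", "eat", "bread"), 30),
    ]
    for triple, count in extract_concept_triples(sample).most_common():
        print(triple, count)  # e.g. ('animal', 'eat', 'food') 165
```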
Combined Source Code Approach for Test Case Prioritization
Iyad Alazzam, K. Nahar
Regression testing is an activity in the software testing process that ensures software remains validated and verified after it has been modified. It is a costly procedure, estimated to account for up to half of software maintenance cost. Many techniques have been used to improve the efficiency and effectiveness of regression testing, such as test case reduction, test case selection, test case prioritization, and retest-all. Test case prioritization increases effectiveness by selecting the most vital test cases, those most able to find and uncover errors in the system under test. This paper introduces a new algorithm for prioritizing the test cases in a test suite based on method coverage and line-of-code coverage: test cases that cover the most methods and lines of code are more effective and efficient at finding errors.
{"title":"Combined Source Code Approach for Test Case Prioritization","authors":"Iyad Alazzam, K. Nahar","doi":"10.1145/3209914.3209936","DOIUrl":"https://doi.org/10.1145/3209914.3209936","url":null,"abstract":"Regression testing is an activity in the software testing process to ensure the software is validated and verified after modification occurred on software. It is costly process procedure which has been expected to reach half cost of the software maintenance cost. Many techniques and approaches have been used in regression testing process to enhance the efficiency and effectiveness of regression testing process. Such as test case reduction, test case selection, test case prioritization and retest all. Test case prioritization has been used in regression testing to increase the effectiveness through selecting the most vital test case that has the ability in finding and uncovering errors in the system under test. This paper has introduced a new algorithm for prioritizing test cases in test suite that is based on method and line of code coverage. Test cases which cover the most methods and line of code are more effective and efficient in finding errors.","PeriodicalId":174382,"journal":{"name":"Proceedings of the 1st International Conference on Information Science and Systems","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128264770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
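The paper's exact combination rule is not given in the abstract; the sketch below shows one common way coverage-based prioritization is done, a greedy "additional coverage" ordering over an assumed mapping from test cases to the methods and lines they cover. The data structures and weights are illustrative assumptions, not the authors' algorithm.

```python
def prioritize(test_coverage, method_weight=1.0, line_weight=1.0):
    """Greedily order test cases by the additional coverage they contribute.

    `test_coverage` maps a test id to (set_of_methods, set_of_lines) it covers.
    Each round picks the test adding the most still-uncovered methods/lines,
    weighted by `method_weight` and `line_weight`.
    """
    remaining = dict(test_coverage)
    covered_methods, covered_lines = set(), set()
    order = []
    while remaining:
        def gain(item):
            methods, lines = item[1]
            return (method_weight * len(methods - covered_methods)
                    + line_weight * len(lines - covered_lines))
        best, (methods, lines) = max(remaining.items(), key=gain)
        order.append(best)
        covered_methods |= methods
        covered_lines |= lines
        del remaining[best]
    return order

if __name__ == "__main__":
    coverage = {
        "t1": ({"A.foo", "A.bar"}, {1, 2, 3, 10}),
        "t2": ({"A.foo"}, {1, 2}),
        "t3": ({"B.baz"}, {20, 21}),
    }
    print(prioritize(coverage))  # ['t1', 't3', 't2']
```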
A Study on the Management Model of Smart Tourism Industry under the Era of Big Data
Zhao Hua
With the rapid development of the Internet and communication technology, the construction of smart tourism is no longer an unrealizable slogan. Building smart tourism in tourist destinations conforms to the strategic goals of tourism industry development in China. Against the background of big data, this paper elaborates the connotations of big data and smart tourism and builds a big-data platform that realizes forecasting and feedback for smart tourism through the analysis of tourism development. The platform can be divided into a government tourism platform, a tourists platform, a tourism-enterprises platform, and a community-residents platform, each relying on big data to fulfill its own duty. Finally, the paper puts forward a construction model and path for realizing the smart tourism platform.
{"title":"A Study on the Management Model of Smart Tourism Industry under the Era of Big Data","authors":"Zhao Hua","doi":"10.1145/3209914.3234637","DOIUrl":"https://doi.org/10.1145/3209914.3234637","url":null,"abstract":"With the rapid development of Internet and the communication technology, the construction of smart tourism is no longer a slogan that can not be realized. The construction of smart tourism in tourist destinations conforms to the strategic goal of tourism industry development in China. Based on the background of big data, this paper elaborated the connotation of big data and smart tourism, and built a large data platform to realize the forecast and feedback of smart tourism through the analysis of tourism development. The platform could be divided into government tourism platform, tourists platform, tourism enterprises platform and community residents platform relying on big data do their own duty. Eventually this paper put forward a construction model and path to realize the smart tourism platform.","PeriodicalId":174382,"journal":{"name":"Proceedings of the 1st International Conference on Information Science and Systems","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128882267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Fixit - A Semi-Automatic Software Deployment Tool for Arbitrary Targets
E. Loseva, A. Obeid, H. Richter, R. Backes, D. Eichhorn
The deployment of software packages is becoming more and more difficult. Canonical Ltd. has therefore created a framework called "JuJu" that serves as a DevOps toolchain. JuJu allows integrated development, deployment, and operation of software packages. Additionally, Canonical provides hundreds of open-source, JuJu-maintained software packages for download in its own online store. However, our tests revealed that only 14% of 35 packages picked from Canonical's JuJu charm store could really be installed as they are. The reason is that many of them are sensitive to mismatches between what is described in the relevant JuJu files and what exists as target hardware at the customer. Because of that, we created a new concept and tool called Fixit for the semi-automatic deployment of JuJu software packages onto arbitrary hardware and software environments, such as Windows and Linux operating systems. Fixit improves the rate of successful first-try installations from 14% to 69%. This is accomplished by semi-automatic analysis and transformation of the package source code.
{"title":"Fixit - A Semi-Automatic Software Deployment Tool for Arbitrary Targets","authors":"E. Loseva, A. Obeid, H. Richter, R. Backes, D. Eichhorn","doi":"10.1145/3209914.3209938","DOIUrl":"https://doi.org/10.1145/3209914.3209938","url":null,"abstract":"The deployment of software packages becomes more and more difficult. Thus Canonical Ltd. has created a framework called \"JuJu\" that serves as a DevOps toolchain. JuJu allows an integrated software development, deployment and operation of software packages. Additionally Canocial provided hundreds of open-source JuJu-maintained software packages in an own online store for download. However, our tests revealed that only 14 % of 35 picked packages from the Canonical's JuJu charm store really be installed as they are. The reason is that many of them are sensitive against mismatches of what is contained in the relevant JuJu files and what exists as target hardware at the customer. Because of that, a new concept and tool called Fixit was created by us for the semi-automatic software-deployment of JuJu software packages onto arbitrary hardware and software environments such as Windows and Linux operating systems. Fixit improves the quota of successful first-try installations from 14 to 69 %. This is accomplished by semi-automatic analysis and transformation of the package source codes.","PeriodicalId":174382,"journal":{"name":"Proceedings of the 1st International Conference on Information Science and Systems","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122730989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Application of Big Data and Intelligent Processing Technology in Modern Chinese Multi-category Words Part of Speech Tagging Corpus
Zhendong Song
Modern Chinese multi-category-word corpora are very widely applied. With the development of the Internet, the data collected for such a corpus keeps growing, and it gradually becomes so large that current relational databases have difficulty handling it. This article analyzes the important role of big data technology in the corpus.
{"title":"Application of Big Data and Intelligent Processing Technology in Modern Chinese Multi-category Words Part of Speech Tagging Corpus","authors":"Zhendong Song","doi":"10.1145/3209914.3234639","DOIUrl":"https://doi.org/10.1145/3209914.3234639","url":null,"abstract":"The application of modern Chinese multi-category words corpus is very wide. With the development of the Internet, data from the corpus is getting bigger and bigger during collection. The data gradually develops so big that the current relational database is difficult to deal with them. This article analyzes the important role of the big data technology in corpu","PeriodicalId":174382,"journal":{"name":"Proceedings of the 1st International Conference on Information Science and Systems","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132981502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Application of Pornographic Images Recognition Based on Depth Learning
Ruolin Zhu, Xiaoyu Wu, Beibei Zhu, Li-hua Song
With the rapid development of the Internet, images have become the main medium of information dissemination, while the spread of pornographic images has grown more serious. We therefore propose a pornographic-image detection method based on a combination of global and local features. Because the NPDI database is deficient in both quality and quantity, this paper constructs a new database, CUC_NSFW (Not Suitable for Work), applying data augmentation methods to improve classification performance. Pornographic images that expose only sensitive organs are the bottleneck for improving the model's recall. We design a sensitive-organ detection module, cascaded behind a residual network, to assist the recognition of pornographic images. Our method achieves good performance in pornographic image detection.
{"title":"Application of Pornographic Images Recognition Based on Depth Learning","authors":"Ruolin Zhu, Xiaoyu Wu, Beibei Zhu, Li-hua Song","doi":"10.1145/3209914.3209946","DOIUrl":"https://doi.org/10.1145/3209914.3209946","url":null,"abstract":"With the rapid development of the Internet, the images become the main medium of information dissemination, while the spread of pornographic images are getting more serious. Therefore, we propose a detection method of pornographic images based on a combination of global and local features. Considering the NPDI database's defective both in quality and quantity, so this paper constructs new database CUC_NSFW (Not Suitable for Work) applying data augmentation methods to improve the classification performance. Pornographic images with only exposed sensitive organs become the bottleneck of improving model recall ratio. We design a sensitive organs detection module, cascaded behind the residual network assisting the recognition of pornography images. And our method makes a good performance based on the research work of pornographic image detection","PeriodicalId":174382,"journal":{"name":"Proceedings of the 1st International Conference on Information Science and Systems","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123980857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
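The abstract describes a global residual-network classifier with a sensitive-organ detection module cascaded behind it, but publishes no code; the hedged Python sketch below shows one plausible form of such a decision cascade, assuming a `classifier` callable that returns a pornography probability and a `detector` callable that returns detected sensitive regions. Both callables and both thresholds are hypothetical, not the authors' released models.

```python
def classify_image(image, classifier, detector,
                   accept_thresh=0.8, reject_thresh=0.2):
    """Cascade decision: trust the global classifier when it is confident,
    otherwise fall back to the local sensitive-region detector.

    `classifier(image)` -> probability that the image is pornographic (0..1)
    `detector(image)`   -> list of detected sensitive regions (may be empty)
    """
    p = classifier(image)
    if p >= accept_thresh:
        return "porn"
    if p <= reject_thresh:
        return "normal"
    # Ambiguous global score: let a local detection decide, which is meant to
    # recover images where only a small sensitive region is exposed.
    return "porn" if detector(image) else "normal"
```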
Signal Recovering Based on Fourier Analysis from Nonuniform Samples
Yukai Gao
This paper describes a new approach that processes nonuniformly sampled signals efficiently, in the sense that the digital spectrum and the signal recovered from the nonuniform samples can be derived precisely. Results are presented on estimating the spectra of signals whose samples were taken at randomly distributed sampling instants. The paper determines the conditions under which a discrete Fourier transform spectrum estimator provides an unbiased approximation of the spectrum of the original continuous-time signal over an unlimited range of frequencies. In this work, a nonuniformly sampled signal is represented as an algebraic sum of impulse functions; based on random-process theory, the expected value of the estimator is obtained, and the signal is recovered by the inverse Fourier transform.
{"title":"Signal Recovering Based on Fourier Analysis from Nonuniform Samples","authors":"Yukai Gao","doi":"10.1145/3209914.3234641","DOIUrl":"https://doi.org/10.1145/3209914.3234641","url":null,"abstract":"This paper describes a new approach that can process nonuniformly sampled signals efficiently, in the sense the digital spectrum and signal recovering from nonuniformly sampled signal can be derived precisely. The results of research on estimation spectra of signals whose samples were taken at randomly distributed sampling instants are presented. The paper determines the conditions under which a discrete Fourier transformation spectrum estimator provides an unbiased approximation of the spectrum of the original continuous-time signal in an unlimited range of frequencies. In the research nonuniformly sampled signals is represented as the algebraic addition of impulse function. Based on the random theory, the expected value is obtained. The signal is recovered by the inverse Fourier transformation.","PeriodicalId":174382,"journal":{"name":"Proceedings of the 1st International Conference on Information Science and Systems","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116741415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
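As a rough numerical illustration of the idea (not the paper's derivation), the sketch below forms a spectrum estimate directly from randomly timed samples, treating the sampled signal as a sum of weighted impulses, and then rebuilds the signal on a uniform grid by the inverse transform. Normalizing by the average sampling rate is an assumption made here so the estimate approximates the continuous-time spectrum in expectation.

```python
import numpy as np

def spectrum_estimate(t, x, freqs, duration):
    """Estimate X(f) from samples x taken at random instants t (impulse-sum model).

    Each sample contributes x_k * exp(-j*2*pi*f*t_k); dividing by the average
    sampling rate N/duration scales the sum toward the continuous-time spectrum.
    """
    rate = len(t) / duration
    return np.array([np.sum(x * np.exp(-2j * np.pi * f * t)) for f in freqs]) / rate

def recover_signal(freqs, X, t_grid):
    """Inverse transform: rebuild the signal on a uniform time grid from X(f)."""
    df = freqs[1] - freqs[0]
    return np.real(np.array(
        [np.sum(X * np.exp(2j * np.pi * freqs * tg)) for tg in t_grid]) * df)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    duration, n = 1.0, 400
    t = np.sort(rng.uniform(0, duration, n))   # randomly distributed sampling instants
    x = np.cos(2 * np.pi * 5 * t)              # 5 Hz test signal
    freqs = np.arange(-50, 50, 0.5)
    X = spectrum_estimate(t, x, freqs, duration)
    x_hat = recover_signal(freqs, X, np.linspace(0, duration, 100))
```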
Improving RealSense by Fusing Color Stereo Vision and Infrared Stereo Vision for the Visually Impaired
Hao Chen, Kaiwei Wang, Kailun Yang
The introduction of RGB-D sensors has attracted attention from researchers in computer vision. With the real-time depth measurement provided by an RGB-D sensor, visually impaired people can be offered better navigational assistance than with traditional aids. However, today's RGB-D sensors usually have a limited detection range and fail to measure depth on objects with special surfaces, such as absorbing, specular, and transparent ones. In this paper, a novel algorithm is developed that uses two RealSense R200 cameras simultaneously to build a short-baseline color stereo vision system. The algorithm enhances depth estimation by fusing the color stereo depth map with the original RealSense depth map obtained by infrared stereo vision. Moreover, the minimum range is decreased by up to 84.6%, from 650 mm to 100 mm. We anticipate that our algorithm will provide better assistance for visually impaired individuals.
{"title":"Improving RealSense by Fusing Color Stereo Vision and Infrared Stereo Vision for the Visually Impaired","authors":"Hao Chen, Kaiwei Wang, Kailun Yang","doi":"10.1145/3209914.3209944","DOIUrl":"https://doi.org/10.1145/3209914.3209944","url":null,"abstract":"The introduction of RGB-D sensor has attracted attention from researchers majored in computer vision. With real-time depth measurement provided by RGB-D sensor, a better navigational assistance than traditional aiding tools can be offered for visually impaired people. However, nowadays RGB-D sensor usually has a limited detecting range, and fails in performing depth measurement on objects with special surfaces, such as absorbing, specular, and transparent surfaces. In this paper, a novel algorithm using two RealSense R200 simultaneously to build a short-baseline color stereo vision system is developed. This algorithm enhances depth estimation by fusing color stereo depth map and original RealSense depth map, which is obtained by infrared stereo vision. Moreover, the minimum range is decreased by up to 84.6%, from 650mm to 100mm. We anticipate out algorithm to provide better assistance for visually impaired individuals.","PeriodicalId":174382,"journal":{"name":"Proceedings of the 1st International Conference on Information Science and Systems","volume":"83 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114087672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
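The abstract does not publish the fusion rule, so the following is only a hedged sketch of the general idea: wherever the infrared-stereo (RealSense) depth map is missing or below its working range, fill in the value from the color-stereo depth map computed over the short baseline between the two cameras. The 100 mm and 650 mm bounds come from the abstract; the array layout, invalid-depth convention, and fill policy are assumptions for illustration.

```python
import numpy as np

def fuse_depth(ir_depth_mm, color_depth_mm, near_mm=100, far_mm=650):
    """Fuse an infrared-stereo depth map with a color-stereo depth map.

    `ir_depth_mm` is the native RealSense depth in millimetres (0 = invalid,
    assumed reliable only beyond `far_mm` in this sketch); `color_depth_mm`
    is depth triangulated from the two RGB cameras and is assumed usable
    down to `near_mm`.
    """
    fused = ir_depth_mm.astype(np.float32).copy()
    ir_invalid = (ir_depth_mm == 0) | (ir_depth_mm < far_mm)  # hole or too close
    color_valid = color_depth_mm >= near_mm
    fill = ir_invalid & color_valid
    fused[fill] = color_depth_mm[fill]
    return fused
```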
An Adaptive Scheduling Algorithm for Multi-Priority Traffic in Load-Balanced Switch
Ya Gao
In this paper, a hierarchical weighted round-robin (HWRR) scheduling algorithm is proposed for load-balanced two-stage switches. As the traffic load changes, the proposed algorithm adjusts the scheduling weights and allocates bandwidth flexibly; intensive dispatching is performed when heavy load is detected. This reduces the packet drop probability and meets the requirements of traffic with different priorities. Simulation results show that HWRR achieves stable delay and cell-drop-probability performance and performs well under several types of traffic.
{"title":"An Adaptive Scheduling Algorithm for Multi-Priority Traffic in Load-Balanced Switch","authors":"Ya Gao","doi":"10.1145/3209914.3226167","DOIUrl":"https://doi.org/10.1145/3209914.3226167","url":null,"abstract":"In this paper, a hierarchical weighted round-robin scheduling (HWRR) algorithm is proposed in Load-Balanced two-stage switches. Following the change of traffic load, the proposed algorithm can adjust scheduling weights and allocate bandwidth flexibility. Intensive dispatching will be performed when heavy load is detected. It can reduce the packet drop probability and meet the requirements of different priority traffic. The simulation results show that HWRR achieves stable performance on delay and cell drop probability, and performs well under several types of traffic.","PeriodicalId":174382,"journal":{"name":"Proceedings of the 1st International Conference on Information Science and Systems","volume":"242 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124673821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
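The abstract does not specify the weight-update rule, so the sketch below is only an illustration of adaptive weighted round-robin over priority queues: each round serves queues in proportion to their weights, and a queue whose occupancy crosses a load threshold has its weight boosted ("intensive dispatching"). The thresholds, weights, and queue model are assumptions, not the paper's parameters.

```python
from collections import deque

class AdaptiveWRR:
    """Weighted round-robin over priority queues with load-adaptive weights."""

    def __init__(self, base_weights, heavy_load=50, boost=2):
        self.queues = [deque() for _ in base_weights]
        self.base_weights = list(base_weights)  # e.g. [4, 2, 1], high -> low priority
        self.heavy_load = heavy_load            # occupancy that triggers boosting
        self.boost = boost

    def enqueue(self, priority, packet):
        self.queues[priority].append(packet)

    def _weights(self):
        # Intensive dispatching: a heavily loaded queue gets its weight boosted.
        return [w * self.boost if len(q) > self.heavy_load else w
                for w, q in zip(self.base_weights, self.queues)]

    def dispatch_round(self):
        """Serve one round: up to `weight` packets from each queue, in priority order."""
        served = []
        for q, w in zip(self.queues, self._weights()):
            for _ in range(min(w, len(q))):
                served.append(q.popleft())
        return served
```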
'Pandora': A multi-encryption software
Jayson J. Cruz, R. D. Fernandez, Carlo M. Palicpic, Dominick L. Uyehara, Ronina C. Tayuan
Privacy and security have never been more important, and the need for them is rising as more reports of digital theft and massive unsolicited government surveillance surface. Pandora is a solution created by the proponents to provide an encryption scheme that combines the new with the old: it integrates AES-256 with modified ciphers. It can encrypt and decrypt files like typical AES encryption, but with the added complexity offered by the ciphers. The system is developed primarily in the Java programming language using the NetBeans IDE. The program asks for a file input, which is then processed through multi-level encryption; in parallel, the system asks for a key that will be used for decryption. The final output is a ".pxt" file. All of this is presented to the user through a graphical user interface (GUI). Additional security elements were implemented to ensure the confidentiality of the files: the key is required to be between eight and thirty-two mixed alphanumeric characters and is processed through a separate hashing algorithm. This scheme ensures that it would take at least 3 to 3.914349685892112e+43 years to brute-force or dictionary-attack the key.
{"title":"'Pandora': A multi-encryption software","authors":"Jayson J. Cruz, R. D. Fernandez, Carlo M. Palicpic, Dominick L. Uyehara, Ronina C. Tayuan","doi":"10.1145/3209914.3209919","DOIUrl":"https://doi.org/10.1145/3209914.3209919","url":null,"abstract":"Privacy and security have never been more important. The need for these things are on the rise considering that more and more reports of digital theft and massive unsolicited government surveillance are surfacing. The project, Pandora, is a solution created by the proponents to provide an encryption scheme that is a combination of the new and the old. Pandora is an integration of AES-256 and modified ciphers. You can use it to encrypt and decrypt files, just like typical AES encryption, but with the added complexity offered by the ciphers. The system is primarily developed using Java programming language using NetBeans IDE. The program will ask for a file input which will then be processed through multi-level encryption. Running parallel with this, the system will ask for a key which will be used for decryption. The final output will be a \".pxt file\". All of these are presented to a user with a graphical user interface (GUI). Added security elements were also implemented to ensure the confidentiality of the files. The key is required to be between eight to thirty-two mixed alphanumeric characters which will then be processed through separate hashing algorithm. This scheme ensures that it would take at least 3 to 3.914349685892112e+43 years to brute force or to perform dictionary attack on the key.","PeriodicalId":174382,"journal":{"name":"Proceedings of the 1st International Conference on Information Science and Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129912142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
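Pandora itself is a Java GUI application and its modified ciphers are not detailed in the abstract; the hedged Python sketch below shows only the hashed-key and AES-256 layer described above, turning an 8-32 character passphrase into a 256-bit key with SHA-256 and encrypting the file body with AES-GCM. The ".pxt" layout, nonce handling, and library choice are assumptions for illustration, not Pandora's actual format.

```python
import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_key(passphrase: str) -> bytes:
    """Hash an 8-32 character alphanumeric passphrase into a 256-bit AES key."""
    if not (8 <= len(passphrase) <= 32):
        raise ValueError("passphrase must be 8 to 32 characters")
    return hashlib.sha256(passphrase.encode("utf-8")).digest()

def encrypt_file(path: str, passphrase: str) -> str:
    """Encrypt a file with AES-256-GCM and write `<path>.pxt` (nonce + ciphertext)."""
    key = derive_key(passphrase)
    nonce = os.urandom(12)  # 96-bit nonce, as recommended for GCM
    with open(path, "rb") as f:
        ciphertext = AESGCM(key).encrypt(nonce, f.read(), None)
    out_path = path + ".pxt"
    with open(out_path, "wb") as f:
        f.write(nonce + ciphertext)
    return out_path

def decrypt_file(path: str, passphrase: str) -> bytes:
    """Read a `.pxt` file and return the decrypted plaintext."""
    key = derive_key(passphrase)
    with open(path, "rb") as f:
        blob = f.read()
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)
```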