
Latest publications from the 2017 7th International Conference on Cloud Computing, Data Science & Engineering - Confluence

Design of low power magnitude comparator
Akash Gupta, Manohar Khatri, S. Rajput, Anu Mehra, S. Bathla
A low power two-bit magnitude comparator is proposed in this work. The proposed magnitude comparator, which uses a coupling technique, is compared with the basic comparator circuit. The performance of both comparators is analyzed for power consumption, delay, and power-delay product (PDP) under a VDD sweep. The simulations are carried out in Mentor Graphics (ELDO SPICE) using 90 nm CMOS technology at a 1 V supply. The simulation results show favorable power consumption for the coupled magnitude comparator circuits: 60.26% for the greater-than function, 56.14% for the less-than function, and 59.48% for the equal-to function.
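The power-delay product and percentage power savings the abstract reports can be sketched as below; the numbers here are illustrative stand-ins, not the paper's measured values.

```python
# Hedged sketch: power-delay product (PDP) and relative power saving,
# using hypothetical numbers (the paper's measurements are not reproduced).

def power_delay_product(avg_power_w: float, delay_s: float) -> float:
    """PDP = average power consumption x propagation delay (joules)."""
    return avg_power_w * delay_s

def power_saving_percent(p_basic: float, p_proposed: float) -> float:
    """Relative power saving of the proposed circuit over the basic one."""
    return (p_basic - p_proposed) / p_basic * 100.0

# Illustrative values only: basic vs. coupled comparator at 1 V supply
p_basic, p_coupled = 10.0e-6, 4.0e-6   # average power in watts
delay = 2.0e-9                         # propagation delay in seconds

print(power_delay_product(p_coupled, delay))     # PDP in joules
print(power_saving_percent(p_basic, p_coupled))  # saving in percent
```

A lower PDP indicates a better energy-efficiency/speed trade-off, which is why the paper evaluates it alongside raw power and delay.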
DOI: 10.1109/CONFLUENCE.2017.7943251 | Pages: 754-758 | Published: 2017-01-01
Citations: 7
Multi-modal travel in India: A big data approach for policy analytics
Hari Bhaskar Sankaranarayanan, Ravish Singh Thind
Multi-modal travel is becoming prominent among Indian passengers due to the rise of low-cost air travel, increasing disposable income, and rail, bus, and air connectivity across cities. This is a huge opportunity for all stakeholders in the transport sector, such as rail, aviation, and surface transport, to operate seamlessly, boost domestic transportation, and ultimately offer passengers a best-of-breed travel solution. In this paper, we propose a framework for policy analytics for rail and air connectivity and discuss how big data can play a key role in analyzing existing datasets such as routes, schedules, booking information, benchmark studies, economic characteristics, and passenger demographics. Big data tools are very useful for processing unstructured data sets, analyzing them and providing meaningful visualizations. Policy analytics can combine the power of information technology, operations research, statistical modeling, and machine learning to modernize policy making and equip policy makers for better data-driven decisions while drafting policies. This would ultimately enable the government's vision of smart cities, seamless transport hubs, and interchanges that provide seamless connectivity and high passenger satisfaction.
DOI: 10.1109/CONFLUENCE.2017.7943157 | Pages: 243-248 | Published: 2017-01-01
Citations: 2
A survey on driver behavior detection techniques for intelligent transportation systems
Rishu Chhabra, S. Verma, C. Krishna
Driver behavior is an essential component of the driver-vehicle-environment system and plays a key role in the design of transport and vehicle systems intended to improve efficiency and safety. The most important factors that influence driver behavior are the environment, the vehicle, and the driver itself; experience, distraction, fatigue, and drowsiness are some of the other factors with an impact on driver behavior. Improper driving behavior is a leading cause of accidents, and thus the detection of driver behavior is an emerging area of research interest. This paper discusses the various techniques used for monitoring driver behavior and classifies them into real-time and non-real-time techniques. A comparative analysis was performed on the basis of the advantages, disadvantages, and methodology of the various techniques for detecting driver behavior in Intelligent Transportation Systems (ITS).
DOI: 10.1109/CONFLUENCE.2017.7943120 | Pages: 36-41 | Published: 2017-01-01
Citations: 73
Deadline constrained scheduling of scientific workflows on cloud using hybrid genetic algorithm
Gursleen Kaur, Mala Kalra
Workflows have simplified the execution of complex large-scale scientific applications. The cloud is an ideal paradigm for executing them, but many open challenges must be addressed for effective workflow scheduling. Several algorithms have been proposed for workflow scheduling, but most fail to incorporate key features of the cloud, such as heterogeneous resources, the pay-per-use model, and elasticity, along with quality-of-service (QoS) requirements. This paper proposes a hybrid genetic algorithm that uses the PEFT-generated schedule as a seed, with the aim of minimizing cost while keeping execution time below a given deadline. A good seed helps accelerate the process of obtaining an optimal solution. The algorithm is simulated on WorkflowSim and evaluated using realistic scientific workflows of various sizes. The experimental results validate that our approach performs better than various state-of-the-art algorithms.
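The idea of seeding a genetic algorithm with a heuristic schedule can be sketched as follows. This is a toy model, not the paper's algorithm: the cost and makespan functions, VM parameters, and the seed schedule are all hypothetical stand-ins (the seed here substitutes for the PEFT schedule the paper uses).

```python
# Hedged sketch: a GA whose initial population includes a heuristic seed,
# minimizing cost among deadline-feasible task-to-VM assignments.
# All numbers and model details below are illustrative assumptions.
import random

TASKS, VMS = 6, 3
VM_SPEED = [1.0, 2.0, 4.0]   # relative speed per VM type (assumed)
VM_PRICE = [1.0, 2.5, 7.0]   # price per time unit (assumed)
WORK = [4, 2, 6, 3, 5, 1]    # abstract work units per task (assumed)
DEADLINE = 12.0

def makespan(ind):           # simplistic: tasks run sequentially
    return sum(WORK[t] / VM_SPEED[ind[t]] for t in range(TASKS))

def cost(ind):
    return sum(WORK[t] / VM_SPEED[ind[t]] * VM_PRICE[ind[t]] for t in range(TASKS))

def fitness(ind):            # heavy penalty for missing the deadline
    return cost(ind) + (1000.0 if makespan(ind) > DEADLINE else 0.0)

def evolve(seed, pop_size=20, gens=50, rng=None):
    rng = rng or random.Random(0)
    # seed joins an otherwise random initial population
    pop = [seed] + [[rng.randrange(VMS) for _ in range(TASKS)]
                    for _ in range(pop_size - 1)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, TASKS)    # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:           # mutation
                child[rng.randrange(TASKS)] = rng.randrange(VMS)
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

seed = [2] * TASKS   # a fast-but-costly heuristic schedule as the seed
best = evolve(seed)
print(best, cost(best), makespan(best))
```

Because the seed is feasible and elitism preserves the best individual, the result is guaranteed to be no costlier than the seed while staying within the deadline.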
DOI: 10.1109/CONFLUENCE.2017.7943162 | Pages: 276-280 | Published: 2017-01-01
Citations: 19
Implementation of object oriented software engineering on LabVIEW graphical design framework for data acquisition in large volume plasma device
R. Sugandhi, P. Srivastava, P. Srivastav, A. Sanyasi, L. M. Awasthi, Vijaysinh Parmar, Keyur Makadia, Ishan Patel, Sandeep Shah
The implementation of a data acquisition and control system (DACS) for laboratory plasma experiments is a challenging task that develops gradually over time due to: (a) rapidly evolving requirements driven by new findings, (b) the application of new ideas to the experiments, (c) the interaction of the software with specialized hardware, and (d) the time scales of measurement and control. This motivates the development of software based on a flexible and modular architecture for scientific computing. We broadly classify it as: (a) base design dealing with specialized measurement hardware and (b) application design for system testing and experimentation. The role of object oriented software engineering (OOSE) is important so that developed software components can be effectively utilized by applications. OOSE on the LabVIEW graphical programming platform is a new and evolving paradigm. It is demonstrated in the Large Volume Plasma Device (LVPD) using high-speed PXIe bus based instrumentation and a hybrid approach of OOSE and data-flow programming. The LVPD is a pulsed plasma device used for investigations ranging from the excitation of wave packets on whistler time scales, relevant to space plasmas, to the understanding of plasma instability and transport due to electron temperature gradient (ETG) driven turbulence, relevant to fusion plasmas. The DACS development effectively handles high-speed acquisition cards on the PXIe bus, data streaming, high channel count system design, and synchronized behavior on the backplane bus. Application development includes applications highlighting pulsed operation and data visualization, including an oscilloscope for raw and processed data visualization. This paper discusses the requirements, object oriented design, development, testing, results, and lessons learned from this initiative.
DOI: 10.1109/CONFLUENCE.2017.7943259 | Pages: 798-803 | Published: 2017-01-01
Citations: 4
Dynamic service composition towards database virtualization for efficient data management
Anshuk Dubey, S. Pal
The automated, data-centric technology of cloud computing serves end users through the SaaS service module, where those users may be either skilled or unskilled. The challenge is to retrieve requested data from enormous data stores through a service-based cloud architecture, for any type of cloud user, in an efficient way, using methodologies such as DBaaS, multi-tenancy, and database integration. Of these, multi-tenancy and database integration fit the SaaS service model through the tightly coupled nature of their service composition. However, such static service composition suffers in implementation complexity, cost, flexibility, and scalability when further database adaptability and efficient data availability are required. The proposed Dynamic Service Composition (DSC) methodology retrieves different types of data from multiple heterogeneous cloud databases, establishing connectivity with new databases at runtime and on demand. This dynamic database connectivity through loosely coupled service composition can supply the requested data at much higher computational speed and overcomes the challenges introduced by static service composition. DSC can govern multiple cloud databases through flexible service connectivity without any information about their position in the cloud, a concept that can be termed database virtualization. Overall, the proposed DSC mechanism can monitor heterogeneous cloud databases and delivers a significant gain in computational power for efficient data availability at markedly lower cost, in a flexible and scalable way.
DOI: 10.1109/CONFLUENCE.2017.7943206 | Pages: 519-526 | Published: 2017-01-01
Citations: 3
A survey on brain tumor detection using image processing techniques
Luxit Kapoor, Sanjeev Thakur
Biomedical image processing is a growing and demanding field. It comprises many different types of imaging methods, such as CT scans, X-ray, and MRI. These techniques allow us to identify even the smallest abnormalities in the human body. The primary goal of medical imaging is to extract meaningful and accurate information from these images with the least error possible. Of the various types of medical imaging available, MRI is the most reliable and safe, as it does not involve exposing the body to any harmful radiation. The MRI can then be processed and the tumor segmented. Tumor segmentation involves several different techniques; the whole process of detecting a brain tumor from an MRI can be classified into four stages: pre-processing, segmentation, optimization, and feature extraction. This survey reviews the research of other professionals and compiles it into one paper.
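The four-stage pipeline the survey describes can be shown schematically. This toy sketch uses a tiny 2D intensity grid and a fixed threshold as stand-ins for the real MRI data and the segmentation/optimization algorithms the surveyed papers cover; everything below is illustrative.

```python
# Schematic sketch: pre-processing -> segmentation -> feature extraction
# on a toy intensity grid (real pipelines use MRI volumes and far more
# sophisticated segmentation and optimization steps).

def preprocess(img):
    """Min-max normalize intensities to [0, 1]."""
    lo = min(min(r) for r in img)
    hi = max(max(r) for r in img)
    return [[(v - lo) / (hi - lo) for v in r] for r in img]

def segment(img, thr=0.5):
    """Binary mask: 1 where normalized intensity exceeds the threshold."""
    return [[1 if v > thr else 0 for v in r] for r in img]

def extract_features(mask):
    """Area and centroid of the segmented region."""
    pts = [(y, x) for y, row in enumerate(mask)
           for x, v in enumerate(row) if v]
    area = len(pts)
    cy = sum(p[0] for p in pts) / area
    cx = sum(p[1] for p in pts) / area
    return {"area": area, "centroid": (cy, cx)}

toy_mri = [          # hypothetical grayscale slice: bright blob = "tumor"
    [10, 12, 11, 10],
    [11, 90, 95, 12],
    [10, 92, 94, 11],
    [12, 11, 10, 10],
]
feats = extract_features(segment(preprocess(toy_mri)))
print(feats)   # area 4, centroid (1.5, 1.5)
```

In practice the "optimization" stage tunes the segmentation (e.g. threshold selection or energy minimization) rather than using a fixed cutoff as here.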
DOI: 10.1109/CONFLUENCE.2017.7943218 | Pages: 582-585 | Published: 2017-01-01
Citations: 80
Computational intelligence based approaches to software reliability
Tamanna, O. Sangwan
Accurate software reliability prediction with a single universal software reliability growth model is very difficult. In this paper we review different models that use computational intelligence for prediction and describe how these techniques outperform conventional statistical models. Parameters, efficacy measures, and methodologies are summarized in tabular form.
DOI: 10.1109/CONFLUENCE.2017.7943144 | Pages: 171-176 | Published: 2017-01-01
Citations: 0
Recommendation generation using typicality based collaborative filtering
Sharandeep Kaur, R. Challa, Naveen Kumar, Shano Solanki, Shalini Sharma, Khushleen Kaur
The rapid growth of information available on the Web related to movies, news, books, hotels, medicines, jobs, etc. has increased the scope of information filtering techniques. A recommender system is a software application that uses filtering techniques and algorithms to generate personalized preferences to support users' decision making. Collaborative filtering is one type of recommender system that finds neighbors of users on the basis of items rated similarly by users, or of common users of items. It suffers from data sparsity and inaccuracy issues. In this paper, the concept of typicality from cognitive psychology is used to find the neighbors of users on the basis of their typicality degree in user groups. Typicality-based Collaborative Filtering (TyCo) using K-means clustering and topic-model-based clustering is compared in terms of Mean Absolute Error (MAE).
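The MAE metric used to compare the two clustering variants is straightforward to compute; the ratings below are made-up examples, not data from the paper.

```python
# Hedged sketch: Mean Absolute Error between predicted and actual ratings.

def mae(predicted, actual):
    """MAE = mean of |predicted rating - actual rating| over test pairs."""
    assert len(predicted) == len(actual) and predicted
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

actual = [4.0, 3.0, 5.0, 2.0]
kmeans = [3.5, 3.0, 4.0, 2.5]   # hypothetical K-means-based TyCo predictions
topic  = [4.0, 2.5, 4.5, 2.0]   # hypothetical topic-model-based predictions
print(mae(kmeans, actual), mae(topic, actual))  # lower MAE is better
```

A lower MAE means predicted ratings sit closer to the held-out actual ratings, which is how the two TyCo variants are ranked.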
DOI: 10.1109/CONFLUENCE.2017.7943151 | Pages: 210-215 | Published: 2017-01-01
Cited by: 3
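The typicality idea in the abstract above can be sketched in code. The following is a minimal, illustrative sketch, not the authors' implementation: each user gets a typicality vector measuring how close their ratings sit to each item's mean rating on an assumed 1–5 scale; neighbors are the users with the most similar typicality vectors, and a prediction is their distance-weighted average rating. Function names, the rating scale, and the distance weighting are all assumptions.

```python
import numpy as np

def typicality_cf_predict(ratings, user, item, n_neighbors=2):
    """Predict a rating with a typicality-flavored collaborative filter.

    `ratings` is a dense user x item matrix with 0 meaning "unrated".
    A user's typicality for an item is how close their rating lies to the
    item's mean rating (the item "prototype"), scaled to [0, 1] for a
    1-5 rating scale; 0 if the user did not rate the item.
    """
    mask = ratings > 0
    # Mean rating per item over the users who actually rated it.
    item_means = ratings.sum(0) / np.maximum(mask.sum(0), 1)
    # Typicality degree: 1 - normalized distance from the prototype.
    typ = np.where(mask, 1.0 - np.abs(ratings - item_means) / 4.0, 0.0)
    # Distance of every user to the target user in typicality space.
    diffs = np.linalg.norm(typ - typ[user], axis=1)
    # Candidate neighbors: closest users who rated the target item.
    order = [u for u in np.argsort(diffs) if u != user and ratings[u, item] > 0]
    neigh = order[:n_neighbors]
    if not neigh:
        return float(item_means[item])  # fall back to the item mean
    w = 1.0 / (1.0 + diffs[neigh])      # closer neighbors weigh more
    return float(np.dot(w, ratings[neigh, item]) / w.sum())

def mae(pred, actual):
    """Mean absolute error, the evaluation metric used in the paper."""
    return float(np.mean(np.abs(np.asarray(pred) - np.asarray(actual))))
```

A prediction stays within the rating range of the chosen neighbors, and `mae` compares a list of predictions against the held-out ratings.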
Survey of performance modeling of big data applications
T. Pattanshetti, V. Attar
An enormous amount of data is being generated at a tremendous rate by multiple sources, and this data often exists in different formats, making it quite difficult to process using traditional methods. The platforms used for processing this type of data rely on distributed architectures such as cloud computing, Hadoop, etc. Big data can be processed efficiently by exploiting the characteristics of the underlying platforms. With the advent of efficient algorithms and software metrics, and by identifying the relationships among these measures, system characteristics can be evaluated in order to improve the overall performance of the computing system. By focusing on the measures that play an important role in determining overall performance, service level agreements can also be revised. This paper presents a survey of different performance modeling techniques for big data applications. One of the key concerns in performance modeling is finding relevant parameters that accurately represent the performance of big data platforms. These extracted performance measures are mapped onto software quality concepts, which are then used for defining service level agreements.
{"title":"Survey of performance modeling of big data applications","authors":"T. Pattanshetti, V. Attar","doi":"10.1109/CONFLUENCE.2017.7943145","DOIUrl":"https://doi.org/10.1109/CONFLUENCE.2017.7943145","url":null,"abstract":"Enormous amount of data is being generated at a tremendous rate by multiple sources, often this data exists in different formats thus making it quite difficult to process the data using traditional methods. The platforms used for processing this type of data rely on distributed architecture like Cloud computing, Hadoop etc. The processing of big data can be efficiently carried out by exploring the characteristics of underlying platforms. With the advent of efficient algorithms, software metrics and by identifying the relationship amongst these measures, system characteristics can be evaluated in order to improve the overall performance of the computing system. By focusing on these measures which play important role in determining the overall performance, service level agreements can also be revised. This paper presents a survey of different performance modeling techniques of big data applications. One of the key concepts in performance modeling is finding relevant parameters which accurately represent performance of big data platforms. These extracted relevant performance measures are mapped onto software quality concepts which are then used for defining service level agreements.","PeriodicalId":6651,"journal":{"name":"2017 7th International Conference on Cloud Computing, Data Science & Engineering - Confluence","volume":"7 1","pages":"177-181"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78906843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 4
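The last step the survey's abstract describes, mapping measured platform metrics onto software quality concepts and then onto service level agreement clauses, can be illustrated with a toy sketch. All metric names, the quality-concept mapping, and the SLA thresholds below are invented for illustration; the paper surveys such mappings rather than prescribing one.

```python
# Hypothetical measured values for a big-data job; names are illustrative only.
measures = {"throughput_mb_s": 420.0, "p99_latency_s": 8.5, "cpu_util": 0.73}

# Map raw platform measures onto software quality concepts (ISO/IEC 25010-style
# names, chosen for illustration).
quality = {
    # throughput achieved per unit of CPU consumed (MB/s per 1000 * utilization)
    "performance_efficiency": measures["throughput_mb_s"] / (measures["cpu_util"] * 1000),
    "time_behaviour": measures["p99_latency_s"],
}

# Express SLA clauses as predicates over the quality concepts; both targets
# are assumed values, not from the paper.
sla = {
    "performance_efficiency": lambda v: v >= 0.5,  # assumed efficiency floor
    "time_behaviour": lambda v: v <= 10.0,         # assumed p99 latency ceiling
}

def sla_report(quality, sla):
    """Evaluate every SLA clause against the derived quality values."""
    return {name: check(quality[name]) for name, check in sla.items()}
```

Running `sla_report(quality, sla)` yields a pass/fail flag per clause, which is the kind of artifact an SLA revision process could consume.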