"Exploring factors influencing implementation process of enterprise application integration (EAI): lessons from government-to-government project in Oman" — F. Al-Balushi, M. Bahari, Azizah Abdul Rahman. DOI: https://doi.org/10.1145/3018009.3018018. In Proceedings of the 2nd International Conference on Communication and Information Processing, 2016.

This exploratory paper presents findings from a pilot study on an Enterprise Application Integration (EAI) implementation process framework in government. The pilot case was conducted at one EAI project in Oman with the intention of investigating the implementation factors that influence the process, from the beginning to the end of the technology life-cycle. Using the Grounded Theory Approach (GTA), 12 factors were found to influence the EAI implementation process. Although these factors may appear similar on the surface to those of common IT implementations, they are fundamentally different. This might be explained by the stakeholders' involvement throughout the process.
"Integration and exchange method of multi-source heterogeneous big data for intelligent power distribution and utilization" — Gang Xu, Shunyu Wu, Pengfei Xie. DOI: https://doi.org/10.1145/3018009.3018040.

With the development of smart grid and big data technologies, the stability and economy of distribution network operation are enhanced effectively. The intelligent power distribution and utilization (IPDU) big data platform, which exchanges operation data with other related distribution network management systems, makes decisions for demand side management, power system, and distributed energy operation strategies by analyzing the big data. To solve the data fusion and exchange problems among all of these information systems, we propose a general information model for multi-source heterogeneous big data. In addition, a data fusion and exchange mechanism is established based on a circle buffer to ensure data quality. Finally, the paper demonstrates the effectiveness of the proposed IPDU big data fusion method through the example of distribution network reconfiguration. The proposed method can satisfy the data exchange demands of future smart grid and demand side management, and it also offers good fusion capability and extensibility.
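The abstract does not detail the circle-buffer mechanism; as a minimal sketch of the idea, a fixed-capacity ring buffer can stage records exchanged between systems, overwriting the oldest entries when producers outrun consumers. All names here are hypothetical, not taken from the paper.

```python
from collections import deque

class CircleBuffer:
    """Fixed-capacity ring buffer: a hypothetical sketch of the staging
    structure a data fusion-and-exchange mechanism could use."""

    def __init__(self, capacity):
        self._buf = deque(maxlen=capacity)  # oldest entries are overwritten

    def push(self, record):
        self._buf.append(record)

    def drain(self):
        """Remove and return all staged records in arrival order."""
        out = list(self._buf)
        self._buf.clear()
        return out

buf = CircleBuffer(capacity=3)
for measurement in [10, 20, 30, 40]:  # 10 is overwritten when 40 arrives
    buf.push(measurement)
print(buf.drain())  # -> [20, 30, 40]
```

Bounding the buffer trades completeness for freshness: under overload, the consumer always sees the most recent window of measurements rather than stale backlog.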
"QoE-driven multi-service resource scheduling strategy in mobile network" — Yifan Liu, Yao Sun, Xin'ge Yan, Qiao Li, Fei Wang, Sheeraz Arif. DOI: https://doi.org/10.1145/3018009.3023387.

As quality of experience (QoE) is concerned more with users' end-to-end subjective experience than quality of service (QoS) is, it has become an important performance metric when designing a resource scheduling algorithm. In this paper, we propose a QoE-driven multi-service resource scheduling (QMRS) algorithm that aims to maximize the QoE of the whole system. In QMRS, a specific utility model is adopted as a normalized QoE evaluation metric for end users; it is highly generalizable and extensible, which is of great importance for evaluating newly introduced services. We use a greedy algorithm based on utility models for different services to optimize wireless resource allocation in a multi-user mobile network. Compared with the traditional proportional fair (PF) scheduling method, the end users' utility value increases from 0.82 to 0.92 when the number of users is small. With 45 users, the utility value increases to 0.56 under QMRS from 0.26 under PF. The results validate that the proposed QMRS can guarantee users' QoE across different services with limited wireless resources.
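The paper's service-specific utility models are not reproduced in the abstract; the sketch below only illustrates the greedy step it describes, assigning resource blocks one at a time to whichever user gains the most normalized utility. The utility function and all parameters are hypothetical.

```python
def utility(rate):
    # Hypothetical normalized utility in [0, 1) with diminishing returns;
    # stands in for the paper's per-service QoE utility models.
    return rate / (rate + 1.0)

def greedy_schedule(num_users, num_blocks, rate_per_block):
    """Assign blocks one at a time to the user with the largest
    marginal utility gain -- the greedy idea QMRS is built on."""
    rates = [0.0] * num_users   # rate accumulated by each user
    alloc = [0] * num_users     # blocks assigned to each user
    for _ in range(num_blocks):
        gains = [utility(rates[u] + rate_per_block[u]) - utility(rates[u])
                 for u in range(num_users)]
        best = max(range(num_users), key=gains.__getitem__)
        rates[best] += rate_per_block[best]
        alloc[best] += 1
    mean_utility = sum(utility(r) for r in rates) / num_users
    return alloc, mean_utility

alloc, mean_utility = greedy_schedule(3, 6, rate_per_block=[1.0, 0.5, 0.25])
```

Because the utility curve is concave, the greedy rule naturally spreads blocks across users instead of starving those with poor channels, which is how a QoE objective differs from pure throughput maximization.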
"Exploiting collaborative learning for concept extraction in the medical field" — Meng Tian, Jianqiang Li, Jijiang Yang, Bo Liu, Xi Meng, Ronghua Li, J. Bi. DOI: https://doi.org/10.1145/3018009.3018054.

With increasing interest in the secondary use of medical data, concept extraction from Electronic Medical Records has drawn more and more scholars' attention. Because manual data annotation is labor-intensive, concept extraction methods mainly use fully labeled documents as training data to build a concept instance identifier. In many cases, however, the available training data are only sparsely labeled, which makes the performance of the resulting classifier poor. Existing concept extraction methods consider either the diversity of datasets or the variety of learning models. This paper therefore proposes a novel approach that improves concept extraction from electronic medical records by combining dataset diversity with a variety of learning models. A large sparsely labeled dataset is split into multiple subsets, and the different subsets are trained with different learning models, such as HMM, MEMM, and CRF, in an iterative way. Our technique leverages the fact that different learning algorithms have different inductive biases and that better predictions can be made by the voted majority.
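The voted-majority combination the abstract describes can be sketched in a few lines: given per-token label sequences from several taggers (e.g. HMM, MEMM, CRF), each token takes the label most models agree on. The label names are invented for illustration.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine equal-length per-token label sequences, one per model,
    by voted majority -- the ensemble step collaborative learning
    relies on."""
    voted = []
    for labels in zip(*predictions):
        voted.append(Counter(labels).most_common(1)[0][0])
    return voted

# Hypothetical outputs of three taggers over the same four tokens
hmm  = ["B-Problem", "I-Problem", "O", "O"]
memm = ["B-Problem", "O",         "O", "O"]
crf  = ["B-Problem", "I-Problem", "O", "B-Test"]
print(majority_vote([hmm, memm, crf]))
# -> ['B-Problem', 'I-Problem', 'O', 'O']
```

The vote only helps when the models err in different places, which is exactly why the paper pairs it with diverse learning algorithms and diverse training subsets.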
"Social media application features to support coaching and mentoring process for student final project" — Kartika Gianina Tileng, Stephanus Eko Wahyudi. DOI: https://doi.org/10.1145/3018009.3018025.

Advances in information and communication technology (ICT) have had significant impacts on a number of fields, including the education sector. Higher education institutions such as universities should introduce ICT to support teaching and learning processes, as it gives students the authority and flexibility to manage their own study time, especially while working on a final project or thesis. Introducing a website that acts as an e-learning tool with social media features might promote study effectiveness: students would be able to communicate with their supervisors or peers through the system, and supervisors could still play their role as mentors and coaches, motivating students to complete their projects on time. This research is intended as the initial step toward the development of such an e-learning system. It examines the significance of the offered e-learning tool features in supporting students during their study. The features are validated against one Technology Acceptance Model (TAM) variable, Perceived Usefulness. The results of this study will then be implemented in the system.
"Stereo-based pedestrian detection using the dynamic ground plane estimation method" — Y. Lim, M. Kang. DOI: https://doi.org/10.1145/3018009.3018035.

Pedestrian detection requires both reliable performance and fast processing. Stereo-based pedestrian detectors meet these requirements through hypothesis-generation processing. However, noisy depth images increase the difficulty of robustly estimating the road line in various road environments, resulting in inaccurate candidate bounding boxes and complicating their correct classification. In this letter, we propose a dynamic ground plane estimation method to manage this problem. Our approach estimates the ground plane optimally using a posterior probability that combines a prior probability with several observations that are uncertain due to cluttered road environments. The experimental results demonstrate that the proposed method estimates the ground plane robustly and accurately in noisy depth images, and that a stereo-based pedestrian detector using the proposed method outperforms previous state-of-the-art detectors with less complexity.
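The abstract's prior-plus-uncertain-observations formulation can be illustrated, in heavily simplified form, as recursive Gaussian fusion of a single ground-plane parameter (say, the camera pitch). This one-dimensional sketch is an assumption for illustration; the paper's actual model is not reproduced here.

```python
def fuse_gaussian(prior_mean, prior_var, observations, obs_var):
    """Posterior over one ground-plane parameter from a Gaussian prior
    and several noisy observations -- a hypothetical 1-D stand-in for
    posterior-probability ground plane estimation."""
    mean, var = prior_mean, prior_var
    for z in observations:
        k = var / (var + obs_var)     # how much to trust this observation
        mean = mean + k * (z - mean)  # posterior mean moves toward z
        var = (1.0 - k) * var         # posterior uncertainty shrinks
    return mean, var

# Prior pitch 0.0 rad (var 1.0); two noisy frame observations near 1.0
mean, var = fuse_gaussian(0.0, 1.0, observations=[1.0, 1.0], obs_var=0.5)
```

The key property is the same one the paper exploits: a strong prior keeps single-frame depth noise from yanking the plane estimate around, while repeated consistent observations still move it.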
"Implementation of multimodal neonatal identification using Raspberry Pi 2" — S. Sumathi, R. Poornima, T. Haripriya. DOI: https://doi.org/10.1145/3018009.3018043.

Abduction, swapping, and mix-ups are unfortunate events that can happen to newborns on hospital premises, and medical personnel find it difficult to curb such incidents. Traditional methods such as birth ID bracelets and offline footprint recognition systems have their own drawbacks. Hence, a neonatal online personal authentication system is proposed, based on a multimodal biometric system in which the neonate's footprint and palm print are used for recognition. The concept is further developed as a prototype implemented on a Raspberry Pi 2 (a single-board computer). In this paper, SIFT feature extraction and the RANSAC algorithm for identifying matched interest points of palm print and footprint biometrics are implemented using OpenCV on the Raspberry Pi. The Raspberry Pi 2 uses a quad-core ARM Cortex-A7 application processor, a system on chip (SoC) designated Broadcom BCM2836, which enhances performance, consumes less power, and reduces overall system cost and size. The Raspberry Pi is controlled by a modified version of the Debian Linux OS optimized for the ARM architecture. Image recognition is performed with the open-source OpenCV 3.1.0 on the Linux platform using CMake, g++, and makefiles. The proposed system thereby improves security in hospitals and birth centers and provides a low-cost alternative to the expensive DNA and HLA (Human Leukocyte Antigen) typing procedures for resolving newborn swapping. Efficiency is higher when multimodality is used (97.2%) than with unimodality. This paper elucidates research carried out on hardware as a biometric module to enhance the performance of a standalone device.
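The paper uses OpenCV's SIFT and RANSAC; the dependency-free sketch below only illustrates the consensus idea behind the RANSAC match-verification step, fitting a 2-D translation between matched keypoints and keeping the inlier set. The data and parameters are invented for illustration.

```python
import random

def ransac_translation(matches, threshold=2.0, iters=200, seed=0):
    """Minimal RANSAC sketch: hypothesize a translation from one random
    match, count matches consistent with it, keep the best consensus.
    `matches` is a list of ((x1, y1), (x2, y2)) keypoint pairs."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(matches)  # 1-point model
        dx, dy = x2 - x1, y2 - y1
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - dx) < threshold
                   and abs(m[1][1] - m[0][1] - dy) < threshold]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

# 8 true correspondences shifted by (5, -3), plus 2 gross outliers
good = [((x, y), (x + 5, y - 3)) for x in range(4) for y in range(2)]
bad = [((0, 0), (40, 40)), ((1, 1), (-30, 7))]
print(len(ransac_translation(good + bad)))  # -> 8
```

In the real pipeline the model is a homography rather than a translation (e.g. `cv2.findHomography(src, dst, cv2.RANSAC)` after SIFT matching), but the reject-the-outliers logic is the same.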
"Clustering for high dimensional categorical data based on text similarity" — G. S. Narayana, D. Vasumathi. DOI: https://doi.org/10.1145/3018009.3018022.

A variety of cluster analysis techniques exist to group objects whose characteristics are related to one another. However, implementing many of these techniques poses a great challenge because much of the data in today's databases is categorical in nature. Despite recent advances in algorithms for clustering categorical data, some cannot handle uncertainty in the clustering process while others have stability issues. This paper proposes an effective text-similarity-based clustering technique. First, the relevant features are selected from the input dataset. These features are then clustered using the Possibilistic Fuzzy C-Means clustering algorithm (PFCM), where the features used for clustering are the similarities between the categorical data. A similarity measure, SMTP (similarity measure for text processing), is presented for pairs of categorical data. The proposed clustering-based method has a high probability of producing a useful subset of independent features. To improve its efficiency, a minimum spanning tree is constructed by an optimization algorithm; here, the adaptive artificial bee colony algorithm (AABC) is used to select the optimal features. The performance of the proposed technique is evaluated by clustering accuracy, the Jaccard coefficient, and Dice's coefficient. The proposed method will be implemented on the MATLAB platform using a machine learning repository.
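The SMTP formula itself is not given in the abstract; as a simple stand-in for a similarity between categorical records, the Jaccard coefficient (which the paper also uses as an evaluation metric) measures set overlap between two records' attribute values. The example records are invented.

```python
def jaccard_similarity(a, b):
    """Set-overlap similarity between two categorical records --
    a simple stand-in for SMTP, whose exact formula is not shown here."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

r1 = ["red", "round", "small"]
r2 = ["red", "square", "small"]
print(jaccard_similarity(r1, r2))  # -> 0.5 (2 shared of 4 distinct values)
```

A pairwise similarity of this kind is what turns categorical records into the numeric similarity features that a fuzzy clustering algorithm such as PFCM can operate on.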
"Context aware recommendation for data visualization" — W. Kanchana, G. Madushanka, H. Maduranga, M. Udayanga, D. Meedeniya, Galhenage Indika Udaya Shantha Perera. DOI: https://doi.org/10.1145/3018009.3018027.

Visualization plays a major role in the data mining process in conveying findings properly to users, and it is important to select the most appropriate visualization method for a given data set in the right context. Data scientists and analysts often have to work with data from unfamiliar domains, and this lack of domain knowledge is a prime reason for choosing inappropriate or suboptimal visualization techniques. Domain experts can easily recommend the commonly used visualization types for a given data set in their domain, but the availability of a domain expert in every data analysis project cannot be guaranteed. This paper proposes an automated system for suggesting the most suitable visualization method for a given dataset using a state-of-the-art recommendation process. Our system is capable of identifying and matching the context of the data to a range of chart types used in mainstream data analytics, enabling data scientists to make visualization decisions with limited domain knowledge.
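The paper's recommendation process is not described in the abstract; as a hypothetical baseline for the idea of matching data context to chart types, a few rules of thumb over the column types of a dataset already go a long way. The rules and names below are illustrative assumptions, not the authors' system.

```python
def recommend_chart(num_numeric, num_categorical, has_time):
    """Hypothetical rule-of-thumb recommender mapping a dataset's
    column-type context to a mainstream chart type."""
    if has_time and num_numeric >= 1:
        return "line chart"       # trend of a measure over time
    if num_numeric >= 2:
        return "scatter plot"     # relationship between two measures
    if num_categorical >= 1 and num_numeric == 1:
        return "bar chart"        # a measure broken down by category
    if num_categorical >= 1:
        return "frequency table"  # categories only
    return "histogram"            # a single numeric distribution

print(recommend_chart(num_numeric=2, num_categorical=0, has_time=False))
# -> scatter plot
```

A learned recommender generalizes this lookup by training on expert-labeled (data context, chart type) pairs instead of hand-written rules.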
"Stiffness parameter evaluation for graphical and haptic gallbladder model" — Aruni Nisansala, G. Dias, N. Kodikara, M. Weerasinghe, D. Sandaruwan, C. Keppitiyagama, Nuwan Dammika. DOI: https://doi.org/10.1145/3018009.3018024.

A surgery simulation platform combines three components: a deformable model, an input/output method, and a collision detection method. Throughout the literature, a number of techniques, algorithms, and mechanisms have been proposed to enhance the performance of these modules. In this paper we present an extensive literature review on deformable object modeling algorithms, collision detection methods, haptic devices, haptic force feedback, and rendering mechanisms. The stiffness value is the governing parameter that determines both the overall performance and the realism of a deformable model: changing the stiffness increases or decreases the model's flexibility, which the user senses through haptic force feedback. It is therefore important to impose an acceptable stiffness on the model to enhance user realism. The acceptable stiffness range may vary with the methods used to implement the deformable model. This paper discusses the stiffness parameter extraction process for the designed deformable gallbladder model under certain constraints and proposes an acceptable stiffness value range. The process has been evaluated against the Young's modulus of live gallbladder tissue.
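The link between tissue Young's modulus and model stiffness can be sketched with the simplest linear-element relation, k = E·A/L, and Hooke's law for the resulting haptic force. The numbers below are placeholders for illustration, not measured gallbladder properties.

```python
def spring_stiffness(young_modulus, area, length):
    """Axial stiffness of a linear element, k = E * A / L -- the kind of
    relation used to derive a model stiffness from a tissue Young's
    modulus. Units: E in Pa, area in m^2, length in m -> k in N/m."""
    return young_modulus * area / length

def reaction_force(stiffness, displacement):
    """Hooke's law F = k * x: the feedback force rendered to the haptic
    device for a given deformation depth (x in m -> F in N)."""
    return stiffness * displacement

# Placeholder values: E = 100 kPa, 1 cm^2 cross-section, 2 cm element
k = spring_stiffness(young_modulus=1.0e5, area=1.0e-4, length=0.02)
force = reaction_force(k, displacement=0.004)  # approx. 2 N at 4 mm depth
```

This is why the acceptable stiffness range depends on the modeling method: a mass-spring mesh distributes E·A/L across many elements, so the per-element k that matches the same tissue modulus differs from the single-spring value.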