Cryptography is the art of protecting information by encrypting the original message into an unreadable format. A cryptographic hash function is a hash function that takes a text message of arbitrary length as input and converts it into a fixed-length string of characters that is infeasible to invert. The values returned by a hash function are called message digests or simply hash values. Because of their versatility, hash functions are used in many applications such as message authentication, digital signatures, and password hashing [2]. The purpose of this study is to apply the Huffman data compression algorithm to the SHA-1 hash function in cryptography. The Huffman data compression algorithm is an optimal prefix-code compression algorithm in which the frequencies of the letters are used to compress the data [1]. An integrated approach is applied to obtain a new compressed hash function by integrating Huffman compressed codes into the core hashing computation of the original hash function.
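For illustration only (this is not necessarily the authors' exact construction), the following Python sketch builds a Huffman prefix code from letter frequencies, encodes the message, packs the resulting bits into bytes, and hashes the compressed bytes with SHA-1; the heap-based tree construction and the bit-packing step are assumptions made for the sake of a runnable example.

```python
import hashlib
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman prefix code from letter frequencies."""
    heap = [[freq, [sym, ""]] for sym, freq in Counter(text).items()]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate single-symbol message
        return {heap[0][1][0]: "0"}
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {sym: code for sym, code in heap[0][1:]}

def compressed_sha1(text):
    """Huffman-encode the message, pack the bits into bytes, then hash with SHA-1."""
    codes = huffman_codes(text)
    bits = "".join(codes[ch] for ch in text)
    padded = bits + "0" * (-len(bits) % 8)   # pad to a whole number of bytes
    data = int(padded, 2).to_bytes(len(padded) // 8, "big")
    return hashlib.sha1(data).hexdigest()

print(compressed_sha1("hello huffman"))
```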
{"title":"Application of Huffman data compression algorithm in hashing computation","authors":"Lakshmi Narasimha Devulapalli Venkata, M. Atici","doi":"10.1145/3190645.3190715","DOIUrl":"https://doi.org/10.1145/3190645.3190715","url":null,"abstract":"Cryptography is the art of protecting information by encrypting the original message into unreadable format. A cryptographic hash function is a hash function which takes an arbitrary length of text message as input and converts that text into a fixed length of encrypted characters which is infeasible to invert. The values returned by hash function are called as message digest or simply hash values. Because of its versatility, hash functions are used in many applications such as message authentication, digital signatures, and password hashing[2]. The purpose of this study is to apply Huffman data compression algorithm to the SHA1 hash function in cryptography. Huffman data compression algorithm is an optimal compression or prefix algorithm where the frequencies of the letters are used to compress the data [1]. An integrated approach is applied to achieve new compressed hash function by integrating Huffman compressed codes in the core functionality of hashing computation of the original hash function.","PeriodicalId":403177,"journal":{"name":"Proceedings of the ACMSE 2018 Conference","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123420782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Information Gain (IG), or the Kullback-Leibler algorithm, is a statistical algorithm employed to extract useful features from datasets and eliminate redundant and valueless ones. Applying this feature selection technique paves the way for sophisticated analysis of Big Data and requires the underlying framework to handle the data's complexity, volume, and velocity. The Hadoop ecosystem comes in handy here, enabling seamless distributed computing that leverages the computing potential of many commodity machines. Previous research studies [1, 2] indicate that Hive is best suited for data warehousing and ETL (Extract, Transform, Load) workloads. We aim to extend Hive's use by analyzing how well it suits analytical algorithms and comparing its performance with MapReduce. In this Big Data era, it is essential to design algorithms that reap the benefits of parallelization over existing frameworks. This study showcases the design of IG for the Hadoop framework and discusses the implementation of IG as an analytical workload on Hive and MapReduce. Both components are inherently built on a shared-nothing architecture, which prevents contention issues and increases data parallelism, making them a good fit for analytical workloads; the programmer is thus relieved of the overhead of maintaining structures such as indexes, caches, and partitions. Assessing the implementation of Information Gain on both of these parallel processing components will provide insight into the benefits and downsides each component offers and, more broadly, will enable researchers and developers to choose the appropriate component for a given task.
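As a point of reference for the statistic being distributed, a minimal single-machine computation of information gain (class entropy minus the conditional entropy given a feature) might look like the sketch below; the paper's actual MapReduce and Hive designs would distribute the counting of (feature value, class) pairs rather than run in one process, and the toy data here are invented.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(Y) of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """IG(Y; X) = H(Y) - H(Y | X), from parallel lists of one feature and the class."""
    total = len(labels)
    conditional = 0.0
    for value in set(feature_values):
        subset = [y for x, y in zip(feature_values, labels) if x == value]
        conditional += (len(subset) / total) * entropy(subset)
    return entropy(labels) - conditional

# toy example: how much does 'outlook' tell us about 'play'?
outlook = ["sunny", "sunny", "overcast", "rain", "rain", "overcast"]
play    = ["no",    "no",    "yes",      "yes",  "no",   "yes"]
print(information_gain(outlook, play))   # ~0.667 bits
```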
{"title":"A comparative study of mapreduce and hive based on the design of the information gain algorithm for analytical workloads","authors":"S. Bagui, Sharon K. John, John P. Baggs","doi":"10.1145/3190645.3190705","DOIUrl":"https://doi.org/10.1145/3190645.3190705","url":null,"abstract":"Information Gain (IG) or the Kullback Leibler algorithm is a statistical algorithm that is employed to extract useful features from datasets to eliminate redundant and valueless features. Applying this feature selection technique paves way for sophisticated analysis on Big Data, requiring the underlying framework to handle the data complexity, volume and velocity. The Hadoop ecosystem comes in handy, enabling for seamless distributed computing leveraging the computing potential of many commercial machines. Previous research studies [1, 2] indicate that Hive is best suited for data warehousing and ETL (Extract, Transform, Load) workloads. We aim to extend Hive's capability to analyze how it suits analytical algorithms and compare its performance with MapReduce. In this Big Data era, it is essential to design algorithms efficiently to reap the benefits of parallelization over existing frameworks. This study will showcase the efficacy in designing IG for Hadoop framework and discuss the implementation of IG for analytical workload on Hive and MapReduce. Inherently both these components are built over a shared nothing architecture which prevents contention issues increasing data parallelism, thus best-fitting for analytical workloads. Hence, the programmer is relieved from the overhead of maintaining structures like indexes, caches and partitions. Assessing implementation of Information Gain on both these parallel processing components will certainly provide insights on the benefits and downsides that each component should offer and at large will enable researchers and developers to employ appropriate components for suitable tasks.","PeriodicalId":403177,"journal":{"name":"Proceedings of the ACMSE 2018 Conference","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123788618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose an NMF (Nonnegative Matrix Factorization)-based approach for collaborative filtering-based recommendation systems to handle the cold-start user issue, especially for New-Users who have not rated any items. The proposed approach utilizes trust network information to impute missing ratings before NMF is applied. We consider two cases of imputation: (1) all users are imputed, and (2) only New-Users are imputed. To study the impact of the imputation, we divide users into three groups and calculate their recommendation errors. Experiments on four different datasets are conducted to examine the proposed approach. The results show that our approach can handle the New-Users issue and reduce recommendation errors for the whole dataset, especially in the second imputation case.
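A minimal sketch of this flow, assuming a simple imputation rule (fill a user's missing ratings with the mean rating of the users they trust) and using scikit-learn's NMF, is shown below; the rating matrix, trust matrix, and imputation rule are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from sklearn.decomposition import NMF

# toy user-item rating matrix (0 = missing); user 3 is a New-User with no ratings
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 0, 0, 0]], dtype=float)

# toy trust network: trust[u, v] = 1 if user u trusts user v
trust = np.array([[0, 1, 0, 0],
                  [1, 0, 0, 0],
                  [0, 0, 0, 0],
                  [1, 0, 1, 0]])

def impute_from_trust(R, trust):
    """Fill a user's missing ratings with the mean rating of the users they trust."""
    R_imp = R.copy()
    for u in range(R.shape[0]):
        trusted = np.where(trust[u] == 1)[0]
        if trusted.size == 0:
            continue
        for i in range(R.shape[1]):
            if R_imp[u, i] == 0:
                vals = R[trusted, i]
                vals = vals[vals > 0]
                if vals.size:
                    R_imp[u, i] = vals.mean()
    return R_imp

R_imputed = impute_from_trust(R, trust)
model = NMF(n_components=2, init="random", random_state=0, max_iter=500)
W = model.fit_transform(R_imputed)   # user factors
H = model.components_                # item factors
R_hat = W @ H                        # predicted ratings used for recommendation
print(np.round(R_hat, 2))
```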
{"title":"Imputing trust network information in NMF-based collaborative filtering","authors":"Fatemah H. Alghamedy, Xiwei Wang, Jun Zhang","doi":"10.1145/3190645.3190672","DOIUrl":"https://doi.org/10.1145/3190645.3190672","url":null,"abstract":"We propose an NMF (Nonnegative Matrix Factorization)-based approach in collaborative filtering based recommendation systems to handle the cold-start users issue, especially for the New-Users who did not rate any items. The proposed approach utilizes the trust network information to impute missing ratings before NMF is applied. We do two cases of imputation: (1) when all users are imputed, and (2) when only New-Users are imputed. To study the impact of the imputation, we divide users into three groups and calculate their recommendation errors. Experiments on four different datasets are conducted to examine the proposed approach. The results show that our approach can handle the New-Users issue and reduce the recommendation errors for the whole dataset especially in the second imputation case.","PeriodicalId":403177,"journal":{"name":"Proceedings of the ACMSE 2018 Conference","volume":"257 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116067079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper proposes a spintronic neuron structure composed of a heterostructure of magnets and a piezoelectric with a magnetic tunnel junction (MTJ). The operation of the device is simulated using SPICE models. Simulation results illustrate that the energy dissipation of the proposed neuron exhibits a 70% improvement over that of other spintronic neurons. Compared to CMOS neurons, the proposed neuron occupies a smaller footprint and operates using less energy. Owing to its versatility and low-energy operation, the proposed neuron is a promising candidate for adoption in artificial neural network (ANN) systems.
{"title":"Hybrid piezoelectric-magnetic neurons: a proposal for energy-efficient machine learning","authors":"William Scott, Jonathan Jeffrey, Blake Heard, D. Nikonov, I. Young, S. Manipatruni, A. Naeemi, R. M. Iraei","doi":"10.1145/3190645.3190688","DOIUrl":"https://doi.org/10.1145/3190645.3190688","url":null,"abstract":"This paper proposes a spintronic neuron structure composed of a heterostructure of magnets and a piezoelectric with a magnetic tunnel junction (MTJ). The operation of the device is simulated using SPICE models. Simulation results illustrate that the energy dissipation of the proposed neuron compared to that of other spintronic neurons exhibits 70% improvement. Compared to CMOS neurons, the proposed neuron occupies a smaller footprint area and operates using less energy. Owing to its versatility and low-energy operation, the proposed neuron is a promising candidate to be adopted in artificial neural network (ANN) systems.","PeriodicalId":403177,"journal":{"name":"Proceedings of the ACMSE 2018 Conference","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133989959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This extended abstract introduces StreetTraffic, an open-source server library that collects and analyzes traffic flow data. By utilizing REST APIs provided by HERE.com, StreetTraffic allows users to crawl traffic flow data for regions or routes of interest. Users can then view the visualized traffic flow history of the crawled data, empowering them to understand the historical traffic patterns of their routes of interest, which could be valuable to commuters or anyone who wants to optimize a trip. The library is currently hosted on GitHub (https://github.com/streettraffic/streettraffic), along with its documentation and tutorials.
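A hedged sketch of the kind of crawling StreetTraffic performs is shown below; the HERE endpoint, query parameters, and credentials are assumptions modeled on the HERE Traffic API of that era, so consult HERE.com and the StreetTraffic documentation for the actual interface.

```python
import requests

# NOTE: the endpoint and query parameters below are assumptions for illustration,
# not StreetTraffic's own API; real credentials from HERE.com are required.
FLOW_URL = "https://traffic.api.here.com/traffic/6.2/flow.json"

def fetch_flow(bbox, app_id, app_code):
    """Request raw traffic flow data for a bounding box ("lat1,lon1;lat2,lon2")."""
    params = {"bbox": bbox, "app_id": app_id, "app_code": app_code}
    resp = requests.get(FLOW_URL, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    data = fetch_flow("52.52,13.40;52.50,13.42",
                      app_id="YOUR_APP_ID", app_code="YOUR_APP_CODE")
    print(list(data.keys()))
```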
{"title":"StreetTraffic: a library for traffic flow data collection and analysis","authors":"Shengyi Huang, Chris Healy","doi":"10.1145/3190645.3190710","DOIUrl":"https://doi.org/10.1145/3190645.3190710","url":null,"abstract":"This extended abstract introduces StreetTraffic, an open-sourced server library that collects and analyzes traffic flow data. By utilizing REST APIs provided by HERE.com, StreetTraffic allows the users to crawl traffic flow data of interested regions or routes. Then, the users could see the visualized traffic flow history of the crawled data, empowering them to understand the historical traffic pattern of their interested routes, which could be valuable to commuters or someone who wants to optimize a trip. The library is currently hosted at Github (https://github.com/streettraffic/streettraffic), along with its documentation and tutorials.","PeriodicalId":403177,"journal":{"name":"Proceedings of the ACMSE 2018 Conference","volume":"258 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132621498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we present the Diabetic Assistant Tool, a web application developed using the Vue framework. The application, as the name suggests, assists diabetics in maintaining a healthy lifestyle. The tool uses modern designs and gives users features that have been requested for many years, in an easy-to-use and visually pleasant package. The Diabetic Assistant Tool differentiates itself from other such applications by simplifying the interface and giving the user only the information they need in a clear and simple way.
{"title":"Diabetic assistant tool","authors":"Roberto Pagan, H. ElAarag","doi":"10.1145/3190645.3190646","DOIUrl":"https://doi.org/10.1145/3190645.3190646","url":null,"abstract":"In this paper, we present the Diabetic Assistant Tool, a web application developed using the Vue framework. The application, as the name suggests, assists diabetics in maintaining a healthy lifestyle. The tool uses modern designs and gives the users features that have been requested for many years, an easy to use and pleasant to the eye package. The Diabetic Assistant Tool application differentiates itself from other such applications by simplifying the interface and only giving the user information they need in a clear and simple way.","PeriodicalId":403177,"journal":{"name":"Proceedings of the ACMSE 2018 Conference","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133344908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the average CMO tenure increasing significantly over the past decade, the business press has speculated about reasons for this climb while the academic literature has remained relatively silent, indecisive about the contributions of the CMO to firm performance [3, 7, 10]. These mixed results have led to calls for more systematic inquiry into the performance consequences of the CMO. This proposal investigates factors associated with CMO tenure. It develops theory based on a competitive sorting model whose underlying intuition is that job tenure increases when the competitiveness of an individual's talents aligns with a firm's strategic directions. We argue that the firm's strategic change has a positive impact on firm performance when the CMO's skills align with the firm's strategic shift. The rationale underlying our argument relies on a competitive sorting model of the CEO labor market [6, 11]. The essential intuition of the model is that CEOs have discernible characteristics that are indicative of their expected productive skills and are matched to firms competitively [4]. We used sales data for firms from 2000 to 2014, retrieved from the Fundamentals Annual section of the COMPUSTAT database [9], and tested the effects of the interaction between firm variables (i.e., long-term business strategy and data-driven approach) and CMO variables (i.e., the Analytical Ability Index (AAI) and the General Ability Index (GAI)) on CMO tenure, as well as the performance implications of this interaction, focusing on the emergence of a data-driven business culture that transforms diverse aspects of business foundations. Specifically, we posit a positive relationship between CMO characteristics that match the firm's strategic shifts and both CMO tenure and firm performance, because CMOs whose distinguishing characteristics are effectively matched to firms exhibit competitive performance consequences [4], which, in turn, are associated with longer tenure. We adopted five proxies for the General Ability Index: number of firms, number of industries, CMO experience, number of executive positions, and executive tenure. Following Custodio et al. [2], we reduced the five proxies to a one-dimensional index using principal component analysis [14], which extracts a common component. We used one component instead of five by employing this dimensionality-reduction method to avoid multicollinearity [5] and minimize measurement error. Because all the proxies for the GAI are static over time, the General Ability Index is calculated once for each CMO and does not vary over time. The index is standardized and thus has zero mean and a standard deviation of 1. The AAI is computed with the same method using three proxies: number of degrees, degree kind, and functional career experience. To extract the proxies from the firm side (firm's valuation on the change in long-term business strategy and in data-driven approach)
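A minimal sketch of the PCA-based index construction described above, using made-up proxy values and scikit-learn, is given below; the exact procedure of Custodio et al. [2] may differ.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# hypothetical proxy matrix: one row per CMO, columns are the five GAI proxies
# (number of firms, number of industries, CMO experience, number of executive
# positions, executive tenure); the values below are invented for illustration.
proxies = np.array([[3, 2, 6.0, 4, 12.0],
                    [1, 1, 3.5, 2,  5.0],
                    [5, 4, 9.0, 6, 20.0],
                    [2, 2, 4.0, 3,  8.0]])

# standardize the proxies, take the first principal component as the common
# component, then re-standardize so the index has zero mean and unit SD
z = StandardScaler().fit_transform(proxies)
component = PCA(n_components=1).fit_transform(z).ravel()
gai_index = (component - component.mean()) / component.std()
print(np.round(gai_index, 3))
```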
{"title":"A study of chief marketing officer (CMO) tenure with competitive sorting model","authors":"Eun Hee Ko, D. Bowman, Sierra Chugg, Dae Wook Kim","doi":"10.1145/3190645.3190717","DOIUrl":"https://doi.org/10.1145/3190645.3190717","url":null,"abstract":"With the average CMO tenure increasing significantly over the past decade, the business press has speculated about reasons for this climb while the academic literature has been relatively silent, remaining indecisive about the contributions of the CMO to firm performance [3, 7, 10]. These mixed results have resulted in calls for more systematic inquiry into the performance consequences of the CMO. This proposal investigates factors associated with CMO tenure. It develops theory based on competitive sorting model whose the underlying intuition is that when the competitiveness of an individual's talents aligns with a firm's strategic directions job tenure increases. We argue that the firm's strategic change has a positive impact on firm performance when the CMO has aligning skills with the firm's strategic shift. Rationales underlying our arguments rely on a competitive sorting model of the CEO labor market [6, 11]. The essential intuition of the model is that CEOs have discernable characteristics that are indicative of their expected productive skills and are matched to firms competitively [4]. We used the sales data for the firms from 2000 to 2014, which were retrieved from Fundamentals Annuals section of COMPUSTAT database [9] and tested the effects of the interaction between the firm (i.e., Long-term business strategy and Data-driven approach) and the CMO variables (i.e., Analytical Ability Index (AAI) and General Ability Index (GAI)) on CMO tenure and the performance implications of the interaction, focusing on the emergence of business culture which transforms diverse aspects of business foundations, data-driven culture. In specific, we suggest a positive relationship between the CMO's characteristics which match to the firm's strategic shifts and CMO tenure and firm performance, because CMOs' distinguished characteristics which are effectively matched to firms are the indicative of their competitive performance consequences [4], which, in turn, are associated with longer tenure. We adopted five proxies of General Ability Index: number of firms, number of industries, CMO experience, number of executive positions, and executive tenure. Following Custodio et al. [2], we reduced the five proxies into one-dimensional index using principal component analysis [14] which extracts common component. We used one component instead of five by employing the method of dimensionality reduction to avoid multicollinearity [5] and minimize measurement error. Because all the proxies for the GAI Index are static variables over time, the General Ability Index is calculated for each CMO, but the index is not varying over time for a CMO. The index is standardized and thus has zero mean and a standard deviation of 1. AAI is computed with the same method using three proxies: number of degrees, degree kind, and functional career experience. 
To extract the proxies from firm side (firm's valuation on the change in long-term business strategy and in data-driven approach)","PeriodicalId":403177,"journal":{"name":"Proceedings of the ACMSE 2018 Conference","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121887259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Code smells are anomalies often introduced in the design, implementation, or maintenance phases of the software development life cycle. Researchers have established several catalogues characterizing these smells. Fowler and Beck developed the most popular catalogue of 22 smells covering a variety of development issues. This paper presents an overview of the existing research conducted on these 22 smells. Our motivation is to present these smells with an easier interpretation for software developers, determine the causes that generate these issues in applications, and describe their impact on different aspects of software maintenance. This paper also highlights previous and recent research on smell detection, with an effort to categorize the approaches based on their underlying concepts.
{"title":"Causes, impacts, and detection approaches of code smell: a survey","authors":"Md Shariful Haque, Jeffery Carver, T. Atkison","doi":"10.1145/3190645.3190697","DOIUrl":"https://doi.org/10.1145/3190645.3190697","url":null,"abstract":"Code smells are anomalies often generated in design, implementation or maintenance phase of software development life cycle. Researchers established several catalogues characterizing the smells. Fowler and Beck developed the most popular catalogue of 22 smells covering varieties of development issues. This literature presents an overview of the existing research conducted on these 22 smells. Our motivation is to represent these smells with an easier interpretation for the software developers, determine the causes that generate these issues in applications and their impact from different aspects of software maintenance. This paper also highlights previous and recent research on smell detection with an effort to categorize the approaches based on the underlying concept.","PeriodicalId":403177,"journal":{"name":"Proceedings of the ACMSE 2018 Conference","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126115213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The problem addressed in this study is the underrepresentation of African American tenured and tenure-track faculty in the STEM professoriate, despite HBCUs producing the most STEM doctoral graduates. This study uses virtual mentoring as a tool to aid undergraduate STEM students in obtaining information about graduate school, because mentoring is a crucial element in preparing African American students for tenured positions. The focus of the study is to compare which conversational agent interface, the Twitter Conversational Agent (TCA) or the Embodied Conversational Agent (ECA), users find more effective. The sample is composed of 37 African American male undergraduate students at an HBCU who are STEM majors and interested in graduate school. Participants must already use Twitter. The same number of students (37) as in Gosha's study will be used in order to allow an accurate comparison between the TCA and the ECA. The study has not yet been executed, but it is intended to be executed soon.
{"title":"Development of a Twitter graduate school virtual mentor for HBCU computer science students","authors":"Lelia Hampton, Kinnis Gosha","doi":"10.1145/3190645.3190714","DOIUrl":"https://doi.org/10.1145/3190645.3190714","url":null,"abstract":"The problem being addressed in the study is the underrepresentation of African American tenured and tenure-track faculty in the STEM professoriate despite HBCUs producing the most STEM doctoral graduates. This study uses virtual mentoring as a tool to aid undergraduate STEM students in obtaining information on graduate school because mentoring is a crucial element in preparing African American students for tenured positions. The focus of study of the study is to compare which conversational agent interface, the Twitter Conversational Agent or Embodied Conversational Agent, the users find the most effective. The sample is composed of 37 African American HBCU male undergraduate students who are STEM majors and interested in graduate school. The participants must already use Twitter. The same number of students (37) used in Gosha's study will be used for this study in order to receive an accurate comparison between the TCA and ECA. The study has not been executed yet, but it is intended to be executed soon.","PeriodicalId":403177,"journal":{"name":"Proceedings of the ACMSE 2018 Conference","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128401943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Delaunay triangulation is a fundamental construct from computational geometry, which finds wide use as a model for multivariate piecewise linear interpolation in fields such as geographic information systems, civil engineering, physics, and computer graphics. Though efficient solutions exist for computation of two- and three-dimensional Delaunay triangulations, the computational complexity for constructing the complete Delaunay triangulation grows exponentially in higher dimensions. Therefore, usage of the Delaunay triangulation as a model for interpolation in high-dimensional domains remains computationally infeasible by standard methods. In this paper, a polynomial time algorithm is presented for interpolating at a finite set of points in arbitrary dimension via the Delaunay triangulation. This is achieved by computing a small subset of the simplices in the complete triangulation, such that all interpolation points lie in the support of the subset. An empirical study on the runtime of the proposed algorithm is presented, demonstrating its scalability to high-dimensional spaces.
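For context, the standard low-dimensional approach, which builds the complete triangulation and is therefore infeasible in high dimension, can be sketched with SciPy as follows; this is not the authors' polynomial-time algorithm, whose point is precisely to avoid constructing the full triangulation, but it shows the interpolation step (locating the containing simplex and combining vertex values with barycentric weights).

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_interpolate(points, values, query):
    """Piecewise-linear interpolation at `query` via the Delaunay triangulation."""
    tri = Delaunay(points)                  # full triangulation: exponential in dimension
    simplex = int(tri.find_simplex(query))
    if simplex < 0:
        raise ValueError("query point lies outside the convex hull")
    verts = tri.simplices[simplex]
    # barycentric coordinates of the query point within its containing simplex
    T = tri.transform[simplex]
    b = T[:-1] @ (query - T[-1])
    bary = np.append(b, 1.0 - b.sum())
    return float(bary @ values[verts])

rng = np.random.default_rng(0)
corners = np.array([[i, j, k] for i in (0, 1) for j in (0, 1) for k in (0, 1)], dtype=float)
pts = np.vstack([corners, rng.random((20, 3))])   # sample points spanning the unit cube
vals = pts.sum(axis=1)                            # f(x) = x1 + x2 + x3, so interpolation is exact
print(delaunay_interpolate(pts, vals, np.array([0.4, 0.5, 0.6])))   # ~1.5
```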
{"title":"A polynomial time algorithm for multivariate interpolation in arbitrary dimension via the Delaunay triangulation","authors":"Tyler H. Chang, L. Watson, T. Lux, Bo Li, Li Xu, A. Butt, K. Cameron, Yili Hong","doi":"10.1145/3190645.3190680","DOIUrl":"https://doi.org/10.1145/3190645.3190680","url":null,"abstract":"The Delaunay triangulation is a fundamental construct from computational geometry, which finds wide use as a model for multivariate piecewise linear interpolation in fields such as geographic information systems, civil engineering, physics, and computer graphics. Though efficient solutions exist for computation of two- and three-dimensional Delaunay triangulations, the computational complexity for constructing the complete Delaunay triangulation grows exponentially in higher dimensions. Therefore, usage of the Delaunay triangulation as a model for interpolation in high-dimensional domains remains computationally infeasible by standard methods. In this paper, a polynomial time algorithm is presented for interpolating at a finite set of points in arbitrary dimension via the Delaunay triangulation. This is achieved by computing a small subset of the simplices in the complete triangulation, such that all interpolation points lie in the support of the subset. An empirical study on the runtime of the proposed algorithm is presented, demonstrating its scalability to high-dimensional spaces.","PeriodicalId":403177,"journal":{"name":"Proceedings of the ACMSE 2018 Conference","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125562029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}