
Latest publications from the Proceedings of the 2015 IEEE 9th International Conference on Semantic Computing (IEEE ICSC 2015)

Integrated execution framework for catastrophe modeling
Yimin Yang, Daniel Lopez, Haiman Tian, Samira Pouyanfar, Fausto Fleites, Shu‐Ching Chen, S. Hamid
Home insurance is a critical issue in the state of Florida, considering that residential properties are exposed to hurricane risk each year. To assess hurricane risk and project insured losses, the Florida Public Hurricane Loss Model (FPHLM), funded by the state's insurance regulatory agency, was developed. The FPHLM is an open and public model that offers an integrated, complex computing framework that can be described in two phases: execution and validation. In the execution phase, all major components of the FPHLM (i.e., data pre-processing, Wind Speed Correction (WSC), and the Insurance Loss Model (ILM)) are seamlessly integrated and sequentially carried out by following a coordination workflow, where each component is modeled as an execution element governed by a centralized data-transfer element. In the validation phase, semantic rules provided by domain experts for individual components are applied to verify the validity of the model output. This paper presents how the model efficiently incorporates components from multiple disciplines in an integrated execution framework to address the challenges that make the FPHLM unique.
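The two-phase structure described in the abstract can be sketched as a small coordination workflow. The component bodies, rule predicates, and numeric factors below are illustrative placeholders, not the actual FPHLM logic:

```python
# Sketch of a two-phase workflow: components run sequentially under a
# central data-transfer element, then per-component semantic rules
# validate the output. All formulas here are hypothetical stand-ins.

def preprocess(data):
    # Hypothetical pre-processing: keep records with positive exposure.
    return [r for r in data if r["exposure"] > 0]

def wind_speed_correction(data):
    # Hypothetical WSC: apply a terrain correction factor to wind speeds.
    return [dict(r, wind=r["wind"] * 0.9) for r in data]

def insurance_loss_model(data):
    # Hypothetical ILM: loss proportional to corrected wind and exposure.
    return [dict(r, loss=0.01 * r["wind"] * r["exposure"]) for r in data]

# Semantic validation rules, one per component, as predicates on the output.
RULES = {
    "WSC": lambda out: all(r["wind"] >= 0 for r in out),
    "ILM": lambda out: all(r["loss"] >= 0 for r in out),
}

def run_pipeline(data):
    """Central data-transfer element: chain the components, validating
    each stage's output against its semantic rule (if one exists)."""
    for stage, func in [("pre", preprocess),
                        ("WSC", wind_speed_correction),
                        ("ILM", insurance_loss_model)]:
        data = func(data)
        rule = RULES.get(stage)
        if rule and not rule(data):
            raise ValueError(f"validation failed at {stage}")
    return data

result = run_pipeline([{"exposure": 100.0, "wind": 50.0},
                       {"exposure": 0.0, "wind": 40.0}])
```

The record with zero exposure is filtered out in pre-processing, and each remaining record flows through WSC and ILM in sequence.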
Citations: 3
A distributed SVM method based on the iterative MapReduce
Xijiang Ke, Hai Jin, Xia Xie, Jie Cao
Linear classification is useful in many applications, but training large-scale data remains an important research issue. Recent advances in linear classification have shown that distributed methods can be effective in reducing the training time. However, for most of the existing training methods based on MPI or Hadoop, the communication between nodes is the bottleneck. To reduce inter-node communication, we propose and analyze a method for distributed support vector machines (SVMs) and implement it on an iterative MapReduce framework. In our distributed method, the local SVMs are generic and can make use of state-of-the-art SVM solvers. Unlike previous attempts to parallelize SVMs, the algorithm makes no assumptions about the density of the support vectors; i.e., its efficiency holds even in the “difficult” cases where the number of support vectors is very high. The performance of our method is evaluated in an experimental environment. By partitioning the training dataset into smaller subsets and optimizing the partitioned subsets across a cluster of computers, we reduce the training time significantly while maintaining a high level of accuracy in both binary and multiclass classification.
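The partition-train-combine idea can be sketched in a few lines. This is a minimal single-round illustration assuming a plain hinge-loss sub-gradient solver as the local SVM and simple parameter averaging as the reduce step; the paper's actual local solver and combination scheme may differ:

```python
def train_linear_svm(X, y, epochs=200, lr=0.1, lam=0.01):
    """Plain sub-gradient descent on the regularized hinge loss; a
    stand-in for any off-the-shelf local SVM solver."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:
                w = [wj + lr * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:
                w = [wj * (1 - lr * lam) for wj in w]
    return w, b

def distributed_svm(X, y, n_parts=2):
    """Map: train a local SVM on each round-robin partition.
    Reduce: average the local parameters (one round of the scheme)."""
    idx = list(range(len(X)))
    parts = [idx[i::n_parts] for i in range(n_parts)]
    models = [train_linear_svm([X[i] for i in p], [y[i] for i in p])
              for p in parts]
    w = [sum(m[0][j] for m in models) / n_parts for j in range(len(X[0]))]
    b = sum(m[1] for m in models) / n_parts
    return w, b

def predict(model, xi):
    w, b = model
    return 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b >= 0 else -1

# Linearly separable toy data: the label is the sign of the first feature.
X = [[1.0, 0.5], [2.0, -1.0], [1.5, 1.0], [3.0, 0.0],
     [-1.0, 0.5], [-2.0, -1.0], [-1.5, 1.0], [-3.0, 0.0]]
y = [1, 1, 1, 1, -1, -1, -1, -1]
model = distributed_svm(X, y, n_parts=2)
acc = sum(predict(model, xi) == yi for xi, yi in zip(X, y)) / len(X)
```

Each partition here is balanced by the round-robin split; in the iterative MapReduce setting, the reduce output would seed the next training round.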
Citations: 15
SPARQL based mapping management
Alan Meehan, Rob Brennan, D. O’Sullivan
The Linked Data (LD) Cloud consists of LD sources covering a wide variety of topics. These data sources use formal vocabularies to represent their data and, in many cases, use heterogeneous vocabularies to represent data about the same topics. This data heterogeneity must be overcome to effectively integrate and consume data from the LD Cloud. Mappings overcome this heterogeneity by transforming heterogeneous source data to a common target vocabulary. As new data sources emerge and existing ones change over time, new mappings must be created and existing ones maintained. Management of these mappings is an important but often neglected issue. The lack of a mapping management method makes it harder to find mappings for sharing, reuse, and maintenance. In this paper we present a method for the management of mappings between LD sources: SPARQL Based Mapping Management (SBMM). The SBMM method uses SPARQL queries to perform analysis and maintenance over an RDF-based mapping representation. We present the results of an experiment that compared the analytical affordance of an RDF-based mapping representation we previously devised, called the SPARQL Centric Mapping (SCM) representation, with that of the R2R Mapping Language.
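Because the mappings themselves are RDF, analysis queries like "which mappings target a given vocabulary?" become ordinary SPARQL. The sketch below assumes a hypothetical `scm:` namespace and property names for illustration; the actual SCM vocabulary is defined in the authors' prior work:

```sparql
# Hypothetical SCM-style representation: find every mapping whose target
# property belongs to the FOAF vocabulary, with its source property.
PREFIX scm: <http://example.org/scm#>

SELECT ?mapping ?sourceProp
WHERE {
  ?mapping a scm:Mapping ;
           scm:sourceProperty ?sourceProp ;
           scm:targetProperty ?targetProp .
  FILTER(STRSTARTS(STR(?targetProp), "http://xmlns.com/foaf/0.1/"))
}
```

Maintenance follows the same pattern with SPARQL 1.1 Update (`DELETE`/`INSERT`) instead of `SELECT`.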
Citations: 3
A system for analysis and comparison of social network profiles
D. Terrana, A. Augello, G. Pilato
This work proposes a system for the analysis and comparison of user profiles in social networks. Posts are extracted and analyzed in order to detect similar content, such as topics, sentiments, and writing styles. A case study analyzing the authenticity of the Italian prime minister's profiles across different social networks is illustrated.
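One simple way to compare two profiles' content, shown here as an illustration (the paper's actual topic, sentiment, and style measures are richer), is cosine similarity over bag-of-words vectors built from each profile's posts:

```python
from collections import Counter
import math

def profile_vector(posts):
    """Bag-of-words over all of a profile's posts (toy content model)."""
    return Counter(w.lower() for p in posts for w in p.split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)       # Counter returns 0 if absent
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical posts: a verified profile, a suspected duplicate, a stranger.
official = ["economic reform announced today", "meeting with european leaders"]
candidate = ["economic reform announced", "leaders meeting in europe"]
unrelated = ["great pizza recipe", "my cat sleeps all day"]

sim_real = cosine(profile_vector(official), profile_vector(candidate))
sim_fake = cosine(profile_vector(official), profile_vector(unrelated))
```

A candidate profile whose similarity to the verified one stays high across content dimensions is more likely authentic.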
Citations: 2
A CFL-ontology model for carbon footprint reasoning
Wei Zhu, Guang Zhou, I. Yen, San-Yih Hwang
As carbon emissions become a serious problem, much research now focuses on how to monitor and manage carbon footprints. One promising approach is to create a “carbon footprint aware” world that exposes people to the carbon footprints associated with the products they buy and the services they use. Carbon footprint labeling (CFL) of products enables consumers to choose products not only based on quality and cost, but also based on their carbon footprints. Similarly, the carbon footprints of common activities and services can be labeled to enable informed choices. CFL can impact supply chain operations as well: with carbon footprint information, the carbon-footprint-optimal supply chain can be identified to model supply chains with the least carbon emissions. Existing carbon footprint management systems mostly rely on databases to maintain carbon footprint data, but databases alone are not sufficient for carbon footprint labeling. In this paper, we develop an ontology model, the CFL-ontology, to specify how products are produced, the processes involved in activities and services, and the computation functions that derive the carbon footprints of products, activities, and services from the associated descriptions. With the CFL-ontology, reasoning can be performed to automatically derive carbon footprint labels for individual products and services.
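The core reasoning step — deriving a product's label from how it is produced — can be sketched as recursive aggregation over an ontology-like structure. The fragment and the numbers below are invented for illustration, not taken from the paper:

```python
# Hypothetical CFL-ontology fragment: each item either carries a measured
# footprint (kg CO2e) or is derived from its parts plus a process emission.
ONTOLOGY = {
    "steel_frame": {"footprint": 320.0},
    "glass_panel": {"footprint": 45.0},
    "bicycle":     {"parts": {"steel_frame": 1}, "process": 12.0},
    "bus_shelter": {"parts": {"steel_frame": 2, "glass_panel": 4},
                    "process": 30.0},
}

def carbon_footprint(item: str) -> float:
    """Reasoning: recursively aggregate part footprints, weighted by
    quantity, and add the assembly process emission."""
    node = ONTOLOGY[item]
    if "footprint" in node:          # base case: measured label
        return node["footprint"]
    return node["process"] + sum(qty * carbon_footprint(part)
                                 for part, qty in node["parts"].items())
```

For the bus shelter this yields 30 + 2·320 + 4·45 = 850 kg CO2e; in the ontology setting the same traversal is done by a reasoner over the computation functions attached to product descriptions.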
Citations: 2
Data-driven solutions for building environmental impact assessment
Qifeng Zhou, Hao Zhou, Yimin Zhu, Tao Li
Life cycle assessment (LCA), as a decision support tool for evaluating the environmental load of products, has been widely used in many fields. However, applying LCA in the building industry is expensive and time consuming, due to the complexity of building structures and the large amount of high-dimensional, heterogeneous building data. So far, building environmental impact assessment (BEIA) remains an important yet under-addressed issue. This paper gives a brief survey of BEIA and investigates the potential advantages of using data mining techniques to discover the relationships between building materials and environmental impacts. We formulate three important BEIA issues as a series of data mining problems and propose corresponding solution schemes. Specifically, first, a feature selection approach based on practical demand and construction characteristics is proposed to perform assessment analysis. Second, a unified framework for constraint-based clustering ensemble selection is proposed to extend the environmental impact assessment range from the building level to the regional level. Finally, a multiple disparate clustering method is presented to support the design of sustainable new buildings. We expect our proposal to shed light on data-driven approaches for environmental impact assessment.
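To make the feature-selection step concrete, here is a generic correlation-based ranking over toy building attributes. It is an illustrative baseline, not the demand- and construction-aware approach the paper proposes; all data and feature names are invented:

```python
import math

def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def select_features(rows, target, k):
    """Rank features by |correlation| with the impact score; keep top k."""
    n_feat = len(rows[0])
    scores = [(abs(pearson([r[j] for r in rows], target)), j)
              for j in range(n_feat)]
    return [j for _, j in sorted(scores, reverse=True)[:k]]

# Toy data: [concrete_mass, window_area, paint_colour_code] per building,
# with an impact score driven almost entirely by concrete mass.
rows = [[10.0, 2.0, 1.0], [20.0, 3.0, 5.0], [30.0, 5.0, 2.0], [40.0, 4.0, 7.0]]
impact = [105.0, 208.0, 311.0, 410.0]
top = select_features(rows, impact, k=1)
```

The ranking correctly singles out the concrete-mass feature; in practice the paper's scheme would additionally encode construction characteristics as constraints on this selection.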
Citations: 5
Estimation of character diagram from open-movie databases for cultural understanding
Yuta Ohwatari, Takahiro Kawamura, Y. Sei, Yasuyuki Tahara, Akihiko Ohsuga
In many movies, cultures, social conditions, and awareness of the issues of the times are depicted in some form. Even though fantasy and SF works are far from reality, their stories do mirror the real world. Therefore, we assume that the social conditions and cultures of the real world can be understood by analyzing movies. As a way to analyze a film, we estimate the interpersonal relationships between its characters. In this paper, we propose a method for estimating the interpersonal relationships of characters using a Markov Logic Network, based on movie script databases on the Web. A Markov Logic Network is a probabilistic logic network that can describe relationships between characters that are not necessarily satisfied on every occasion. In experiments, we confirmed that our proposed method can estimate favors between the characters in a movie with a precision of 64.2%. Finally, by comparing the estimated relationships with social indicators, we discuss the relevance of the movies to the real world.
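The "not necessarily satisfied on every occasion" property is what distinguishes Markov Logic from hard first-order logic: each formula carries a weight, and worlds violating it merely become less probable. A toy two-formula model over one Favor atom, with invented weights and predicates, shows the mechanics:

```python
import math

# Toy Markov-Logic-style model (weights are illustrative, not learned):
#   w1 = 1.5 :  SameScene(a,b) -> Favor(a,b)
#   w2 = 2.0 :  Insults(a,b)   -> not Favor(a,b)
W_SCENE, W_INSULT = 1.5, 2.0

def world_weight(favor, same_scene, insults):
    """exp(sum of weights of the ground formulas satisfied in this world)."""
    total = 0.0
    if (not same_scene) or favor:          # implication 1 satisfied
        total += W_SCENE
    if (not insults) or (not favor):       # implication 2 satisfied
        total += W_INSULT
    return math.exp(total)

def prob_favor(same_scene, insults):
    """P(Favor | evidence), enumerating the two possible worlds."""
    w_true = world_weight(True, same_scene, insults)
    w_false = world_weight(False, same_scene, insults)
    return w_true / (w_true + w_false)

p_friendly = prob_favor(same_scene=True, insults=False)  # evidence supports Favor
p_hostile = prob_favor(same_scene=True, insults=True)    # conflicting evidence
```

With conflicting evidence the heavier insult formula wins, but only probabilistically — neither rule is treated as inviolable.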
Citations: 0
CSP-based adaptation of multimedia document composition
Azze-eddine Maredj, Nourredine Tonkin
We propose an approach for the dynamic adaptation of multimedia documents modeled as an over-constrained constraint satisfaction problem (OCSP). In addition to the solutions it provides for determining the relations that do not comply with the user profile and for the combinatorial explosion that arises when searching for alternative relations, the approach ensures a certain quality of service in the presentation of the adapted document: (i) if the required constraints are not satisfied, no document is generated, unlike other approaches that generate a document even when its presentation is completely different from the initial one; (ii) the definition of a constraint hierarchy (strong constraints and medium constraints) preserves as many of the initial document's relations as possible in the adapted one. As a result, the adapted presentations are consistent and close to the initial ones.
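The constraint-hierarchy behavior in (i) and (ii) can be sketched in a few lines. The data model (relations as named, leveled constraints; a profile as a set of relations it cannot render) is a hypothetical simplification of the OCSP formulation:

```python
# Sketch of the constraint hierarchy: strong constraints must hold or no
# adapted document is produced; medium constraints are kept when possible
# and dropped otherwise.

def adapt(relations, forbidden):
    """relations: list of (name, level) pairs in the initial document;
    forbidden: set of relation names the target profile cannot render.
    Returns the relations of the adapted document, or None."""
    adapted = []
    for name, level in relations:
        if name in forbidden:
            if level == "strong":
                return None      # required constraint unsatisfiable: (i)
            continue             # drop a medium constraint: (ii)
        adapted.append(name)
    return adapted

doc = [("video_before_text", "strong"), ("audio_overlaps_video", "medium")]
adapted = adapt(doc, {"audio_overlaps_video"})  # medium relation dropped
failed = adapt(doc, {"video_before_text"})      # strong violated -> None
```

A full OCSP solver would also search for alternative relations before dropping a medium one; this sketch only shows the two quality-of-service guarantees.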
Citations: 3
Towards building a word similarity dictionary for personality bias classification of phishing email contents
Ke Ding, Nicholas Pantic, You Lu, S. Manna, M. Husain
Phishing attacks are a form of social engineering used to steal private information from users through email. A general approach to phishing susceptibility analysis is to profile the user's personality using personality models such as the Five Factor Model (FFM) and determine the susceptibility to a set of phishing attempts. The FFM is a personality profiling system that scores participants on five separate personality traits: openness to experience (O), conscientiousness (C), extraversion (E), agreeableness (A), and neuroticism (N). However, existing approaches do not take the content into account: for example, a phishing email offering an enticing free prize might be very effective on a dominant O-personality (curious, open to new experience), but not on an N-personality (tendency to experience negative emotion). Therefore, it is necessary to consider the personality bias of phishing email contents during susceptibility analysis. In this paper, we propose a method to construct a dictionary based on the semantic similarity of prospective words describing the FFM. Words generated through this dictionary can be used to label phishing emails according to personality bias and serve as the key component of a personality bias classification system for phishing emails. We have validated our dictionary construction using a large public corpus of phishing email data, which shows the potential of the proposed system in anti-phishing research.
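Once such a dictionary exists, labeling reduces to matching email words against per-trait lexicons. The tiny lexicons below are hand-picked for illustration — the paper builds its dictionary from semantic word similarity, not by hand:

```python
# Illustrative trait lexicons (O = openness lures, N = neuroticism/fear
# lures); the real dictionary would cover all five FFM traits.
TRAIT_WORDS = {
    "O": {"free", "new", "discover", "exclusive", "prize"},
    "N": {"warning", "suspended", "urgent", "risk", "penalty"},
}

def personality_bias(email_text):
    """Label an email with the FFM trait whose lexicon it matches most."""
    words = set(email_text.lower().split())
    scores = {trait: len(words & lex) for trait, lex in TRAIT_WORDS.items()}
    return max(scores, key=scores.get)

lure = "claim your free exclusive prize and discover new offers"
threat = "urgent warning your account is suspended penalty applies"
```

A prize-style lure is labeled as O-biased (effective on curious users) and a fear-style email as N-biased, which is exactly the distinction the susceptibility analysis needs.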
Citations: 6
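The dictionary-construction idea in the abstract above — seeding each FFM trait with representative words and expanding the trait's word list by semantic similarity — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the word vectors and seed words are toy assumptions, and a real system would use trained embeddings or a WordNet-based similarity measure over a large vocabulary.

```python
import math

# Hypothetical toy word vectors; a real system would use trained
# embeddings or a WordNet-based similarity measure instead.
VECTORS = {
    "curious":     [0.9, 0.1, 0.0],
    "adventurous": [0.8, 0.2, 0.1],
    "organized":   [0.1, 0.9, 0.0],
    "careful":     [0.2, 0.8, 0.1],
    "anxious":     [0.0, 0.1, 0.9],
    "nervous":     [0.1, 0.0, 0.8],
}

# One illustrative seed word per trait (O, C, N shown; A and E omitted).
SEED_WORDS = {"O": "curious", "C": "organized", "N": "anxious"}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def build_dictionary(threshold=0.8):
    """Assign each vocabulary word to the trait whose seed word it is
    most similar to, keeping only matches above the threshold."""
    dictionary = {trait: [] for trait in SEED_WORDS}
    for word, vec in VECTORS.items():
        best_trait, best_sim = None, 0.0
        for trait, seed in SEED_WORDS.items():
            sim = cosine(vec, VECTORS[seed])
            if sim > best_sim:
                best_trait, best_sim = trait, sim
        if best_sim >= threshold:
            dictionary[best_trait].append(word)
    return dictionary
```

With such a dictionary, a phishing email's words can be matched against each trait's word list to estimate which personality bias its content targets.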
An ontological and hierarchical approach for supply chain event aggregation
Xing Tan, G. Tayi
Time Petri Nets (TPN) have been applied to model the basic event patterns that arise commonly in supply chains. Additionally, these TPN-specified patterns can be aggregated to create more complicated supply chain event systems. Meanwhile, in our previous work we introduced SCOPE (Situation Calculus Ontology for PEtri nets), which semantically describes Petri Nets using the Situation Calculus. In this paper, we show that TESCOPE, which extends SCOPE to incorporate the concept of time, can be naturally applied to supply chain event aggregation. That is, we show that supply chain event patterns can be easily represented as TESCOPE-based Golog procedures, where Golog is a logic language built on top of the Situation Calculus; we further demonstrate by examples that these basic Golog procedures can be aggregated semantically and hierarchically into complex ones.
{"title":"An ontological and hierarchical approach for supply chain event aggregation","authors":"Xing Tan, G. Tayi","doi":"10.1109/ICOSC.2015.7050780","DOIUrl":"https://doi.org/10.1109/ICOSC.2015.7050780","url":null,"abstract":"Time Petri Nets (TPN) have been applied to model the basic event patterns that arise commonly in supply chains. Additionally these TPN-specified patterns can be aggregated to create more complicated supply chain event systems. In our previous work, meanwhile, we introduced SCOPE (Situation Calculus Ontology for PEtri nets), which semantically describes Petri Nets using the Situation Calculus. In this paper, we show that TESCOPE, which extends SCOPE to incorporate the concept of time, can be naturally applied for supply chain event aggregation. That is, we show that supply-chain event patterns can be easily represented as TESCOPE-based Golog procedures, where Golog is a logic language built on top of the Situation Calculus; We further demonstrate by examples that these basic Golog procedures can be aggregated semantically and hierarchically into complex ones.","PeriodicalId":126701,"journal":{"name":"Proceedings of the 2015 IEEE 9th International Conference on Semantic Computing (IEEE ICSC 2015)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128559876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
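The TPN-based pattern aggregation described in this abstract can be illustrated with a minimal sketch: two basic event patterns (each a timed transition) are composed sequentially through a shared place. All place and transition names here are hypothetical, and the firing rule uses simplified absolute-time intervals rather than full TPN semantics relative to enabling times.

```python
class TimePetriNet:
    """Minimal Time Petri Net: transitions consume and produce tokens
    and carry a static firing interval [eft, lft] (simplified to
    absolute time for illustration)."""

    def __init__(self):
        self.marking = {}      # place name -> token count
        self.transitions = {}  # name -> (inputs, outputs, eft, lft)

    def add_place(self, place, tokens=0):
        self.marking[place] = tokens

    def add_transition(self, name, inputs, outputs, eft, lft):
        self.transitions[name] = (inputs, outputs, eft, lft)

    def enabled(self, name):
        inputs, _, _, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name, at):
        """Fire a transition at time `at` if it is enabled and the
        time falls inside its firing interval."""
        inputs, outputs, eft, lft = self.transitions[name]
        assert self.enabled(name), f"{name} is not enabled"
        assert eft <= at <= lft, f"{name} fired outside [{eft}, {lft}]"
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1


# Two basic patterns aggregated sequentially via the shared place
# "in_transit" (names are illustrative, not from the paper).
net = TimePetriNet()
net.add_place("ordered", tokens=1)
net.add_place("in_transit")
net.add_place("delivered")
net.add_transition("ship", ["ordered"], ["in_transit"], eft=1, lft=3)
net.add_transition("receive", ["in_transit"], ["delivered"], eft=2, lft=5)

net.fire("ship", at=2)
net.fire("receive", at=4)
assert net.marking["delivered"] == 1
```

The sequential composition through a shared place mirrors the paper's hierarchical aggregation: each basic pattern stays self-contained, and larger event systems are built by wiring patterns together at their interface places.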