
2019 13th International Conference on Software, Knowledge, Information Management and Applications (SKIMA): Latest Publications

A Design and Implementation of Performance Dashboard for the Work Integrated Learning Unit
Pathathai Na-Lumpoon, Pree Thiengburanathum
Higher education institutes now focus on learning outcomes, expecting curricula to produce graduates with qualified employability skills. A Work Integrated Learning (WIL) center is a unit within an academic faculty that aims to develop students’ competencies in the workplace and the classroom based on the integrated learning outcomes. However, measuring the performance of the WIL program is difficult when the related key performance indicators (KPIs) have not been identified. In this paper, we develop a novel model of the KPIs for WIL activities and the relevant employability skills. Additionally, we present the design and implementation of dashboards to display and monitor the defined KPIs. As a result, users can gain insightful information, and decision makers can act with confidence.
DOI: 10.1109/SKIMA47702.2019.8982514
Citations: 0
Deep Learning with Convolutional Neural Network and Long Short-Term Memory for Phishing Detection
Moruf Akin Adebowale, Khin T. Lwin, Mohammed Alamgir Hossain
Phishers sometimes exploit users’ trust in a known website’s appearance by using a similar page that looks like the legitimate site. In recent times, researchers have tried to identify and classify the features that can contribute to the detection of phishing websites. This study focuses on the design and development of a deep learning based phishing detection solution that leverages the Uniform Resource Locator (URL) and website content such as images and frame elements. A Convolutional Neural Network (CNN) and the Long Short-Term Memory (LSTM) algorithm were used to build a classification model. The experimental results showed that the proposed model achieved an accuracy rate of 93.28%.
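The abstract does not specify the input encoding, but CNN + LSTM text classifiers typically consume a URL as a fixed-length sequence of character indices. A minimal sketch of such an encoding, with an illustrative vocabulary and sequence length (both assumptions, not taken from the paper):

```python
# Hypothetical sketch: turn a URL into a fixed-length sequence of character
# indices, the usual input representation for a CNN + LSTM text classifier.
# The vocabulary and max_len are illustrative choices, not from the paper.
VOCAB = "abcdefghijklmnopqrstuvwxyz0123456789-._~:/?#@!$&'()*+,;=%"
CHAR_TO_IDX = {c: i + 1 for i, c in enumerate(VOCAB)}  # 0 is reserved for padding

def encode_url(url: str, max_len: int = 80) -> list[int]:
    """Map each character to its index; truncate or zero-pad to max_len."""
    seq = [CHAR_TO_IDX.get(c, 0) for c in url.lower()[:max_len]]
    return seq + [0] * (max_len - len(seq))

vec = encode_url("http://paypa1-login.example.com/verify")  # hypothetical URL
```

The resulting integer sequence would feed an embedding layer, followed by the convolutional and recurrent layers the paper describes.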
DOI: 10.1109/SKIMA47702.2019.8982427
Citations: 19
A Simultaneous Approach for Compression and Encryption Techniques Using Deoxyribonucleic Acid
D. A. Zebari, H. Haron, D. Zeebaree, A. Zain
Data compression is the discipline of representing content in a compact form. It has become a necessity in the field of communication as well as in many scientific studies. Data transmission must be sufficiently secure so that information can travel over a channel medium without loss or tampering. Encryption is the process of scrambling information so that only the intended receiver can read it, and it therefore provides a means of securing data. Together, compression and encryption are the two crucial steps required for the protected transmission of large amounts of information. Typically, the compressed data is encrypted and then transmitted; however, this sequential approach is time-consuming and computationally costly. In this paper, a simultaneous compression and encryption technique based on DNA is proposed for various kinds of secret data. In the simultaneous approach, both operations are performed in a single step, which reduces the time for the whole task. The present work consists of two phases. The first phase encodes the plaintext with 6 bits instead of 8, meaning each character is represented by three DNA nucleotides, whereas each pixel of an image is encoded by four DNA nucleotides; this phase compresses the plaintext by 25% relative to the original text. In the second phase, compression and encryption are performed at the same time: both types of data are compressed to half their size, and the generated symmetric key is encrypted. This technique is therefore more secure against intruders. Experimental results show better performance of the proposed scheme compared with standard compression techniques.
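The 6-bit text-encoding phase can be illustrated directly: dropping two of the eight ASCII bits yields the stated 25% compression, and each remaining 2-bit pair maps to one DNA base. The bit-to-base table below is an assumed example; the paper does not publish its actual mapping or key material:

```python
# Hypothetical sketch of the 6-bit DNA encoding phase: each character is
# reduced to 6 bits (25% smaller than 8-bit ASCII) and written as three
# nucleotides, 2 bits per base. The bit-to-base table is an assumed example.
BASE = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}

def char_to_dna(ch: str) -> str:
    """Encode one character's low 6 bits as three DNA nucleotides."""
    bits = ord(ch) & 0b111111          # keep 6 of 8 bits -> 25% compression
    return "".join(BASE[(bits >> s) & 0b11] for s in (4, 2, 0))

def text_to_dna(text: str) -> str:
    return "".join(char_to_dna(c) for c in text)
```

Image pixels, which need a full 8 bits, would analogously use four nucleotides each; the encryption step layered on top of this encoding is not sketched here.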
DOI: 10.1109/SKIMA47702.2019.8982392
Citations: 19
Towards Dynamic Fit Assessment for Strategic Alignment using Enterprise Architecture Models
Dóra Ori, Z. Szabó
Strategic alignment is a complex co-alignment process of strategy, organization, IT and management. An enterprise architecture model describes the fundamental structure of a system, including its components and their relationships, providing a holistic view that integrates the business and technology domains. The goal of this paper is to discuss enterprise architecture management (EAM) based opportunities for supporting the strategic alignment process, and to provide a systematic review of available methods and analysing tools. The strategic alignment process has four phases that can be described by the combination of EAM components. Existing EAM-based tools and methods, as well as new EA model-based analysing approaches, can be directly used in the dynamic alignment process by discovering problems and opportunities.
DOI: 10.1109/SKIMA47702.2019.8982388
Citations: 0
An automatic cluster-based approach for depth estimation of single 2D images
Muhammad Awais Shoukat, Allah Bux Sargano, Z. Habib, L. You
In this paper, the problem of depth estimation from a single 2D image is considered. This is a very important problem owing to its various applications in industry. Previous learning-based methods rest on the key assumption that color images with photometric resemblance are likely to present similar depth structure. However, these methods search the whole dataset to find corresponding images using handcrafted features, which is a cumbersome and inefficient process. To overcome this, we propose a clustering-based algorithm for depth estimation of a single 2D image using transfer learning. To realize this, images are categorized into clusters using the K-means clustering algorithm, and features are extracted through a pre-trained deep learning model, ResNet-50. After clustering, an efficient feature-vector replacement step is embedded to speed up the process without compromising accuracy. Then, images with a structure similar to the input image are retrieved from the best-matched cluster based on their correlation values. The retrieved candidate depth images are employed to initialize the prior depth of a query image using a weighted-correlation-average (WCA). Finally, the estimated depth is improved by removing variations using a cross-bilateral filter. To evaluate the performance of the proposed algorithm, experiments are conducted on two benchmark datasets, NYU v2 and Make3D.
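The weighted-correlation-average (WCA) initialization step can be sketched as a correlation-weighted mean of the retrieved candidate depths. Plain Python lists stand in for per-pixel depth maps here; this is an illustration of the formula, not the authors' implementation:

```python
# Illustrative sketch of the WCA step: the prior depth of a query image is
# the average of the retrieved candidate depth maps, each weighted by its
# correlation with the query. Flat lists stand in for 2D depth maps.
def wca_prior(candidates: list[list[float]], correlations: list[float]) -> list[float]:
    """Per-element weighted average: sum_i(w_i * d_i) / sum_i(w_i)."""
    total = sum(correlations)
    n = len(candidates[0])
    return [
        sum(w * depth[i] for w, depth in zip(correlations, candidates)) / total
        for i in range(n)
    ]

prior = wca_prior([[1.0, 2.0], [3.0, 4.0]], [0.5, 0.5])  # equal weights -> plain mean
```

In the full pipeline this prior would then be refined with the cross-bilateral filter mentioned in the abstract.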
DOI: 10.1109/SKIMA47702.2019.8982472
Citations: 0
Local Differentially Private Matrix Factorization For Recommendations
N. Jeyamohan, Xiaomin Chen, N. Aslam
In recent years, recommendation systems have become popular in the e-commerce industry as they can be used to provide a personalized experience to users. However, performing analytics on users’ information has also raised privacy concerns. Various privacy protection mechanisms have been proposed for recommendation systems against user-side adversaries; however, most of them disregard the privacy violations caused by the service providers. In this paper, we propose a local differential privacy mechanism for matrix factorization based recommendation systems. In our mechanism, users perturb their ratings locally on their devices using Laplace and randomized response mechanisms and send the perturbed ratings to the service provider. We evaluate the proposed mechanism using the MovieLens dataset and demonstrate that it can achieve a satisfactory tradeoff between data utility and user privacy.
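A minimal sketch of the client-side perturbation, assuming ratings on a 1-5 scale (sensitivity 4) and a per-rating budget epsilon; the paper's exact noise calibration is not reproduced here. Laplace noise masks the rating value, and randomized response can mask whether an item was rated at all:

```python
import math
import random

# Hedged sketch, with assumed parameters: Laplace(sensitivity/epsilon) noise
# is added to the rating on the user's device, then clamped to the 1-5 scale.
def perturb_rating(rating: float, epsilon: float, sensitivity: float = 4.0) -> float:
    """Laplace mechanism via inverse-CDF sampling from a uniform variate."""
    u = random.random() - 0.5
    noise = -(sensitivity / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return min(5.0, max(1.0, rating + noise))

def randomized_response(rated: bool, p: float = 0.75) -> bool:
    """Report the true 'rated' flag with probability p, otherwise flip it."""
    return rated if random.random() < p else not rated
```

Only the perturbed values leave the device, so the service provider running the matrix factorization never sees raw ratings.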
DOI: 10.1109/SKIMA47702.2019.8982536
Citations: 2
Partitioning based incremental marginalization algorithm for anonymizing missing data streams
Ankhbayar Otgonbayar, Zeeshan Pervez, K. Dahal
The IoT and its applications are an inseparable part of the modern world, and the IoT is expanding into every corner of the world where the internet is available. IoT data streams are utilized by many organizations for research and business. To benefit from these data streams, the data handling party must secure individuals’ privacy, and the most common privacy preservation approach is data anonymization. However, IoT produces missing data streams owing to the varying device pool, the preferences of individuals, and unpredicted device malfunctions. Minimization of missingness and information loss is very important when anonymizing missing data streams. To achieve this, we introduce IncrementalPBM (Incremental Partitioning Based Marginalization) for anonymizing missing data streams. IncrementalPBM utilizes a time-based sliding window for missing data stream anonymization, and it aims to control the number of quasi-identifiers (QIDs) used for anonymization while increasing the number of tuples available for anonymization. Our experiment on a real dataset showed that IncrementalPBM is effective and efficient for anonymizing missing data streams compared with an existing missing data stream anonymization algorithm. IncrementalPBM showed significant improvement: 5% to 9% less information loss and 4,500 to 6,000 more anonymization re-uses, while showing comparable clustering, suppression and runtime.
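The marginalization idea can be sketched as projecting each tuple onto the quasi-identifiers (QIDs) it actually has, so missing values need not be imputed or suppressed. This is a simplified illustration under assumed record and QID names, not the IncrementalPBM algorithm itself:

```python
# Hedged sketch of marginalization for missing data: a record in the sliding
# window is published over only the QIDs that are present, instead of
# suppressing the whole tuple. Keys and the sample record are illustrative.
def marginalize(record: dict, qids: list[str]) -> dict:
    """Project a record onto its non-missing quasi-identifiers."""
    return {q: record[q] for q in qids if record.get(q) is not None}

row = {"age": 34, "zip": None, "sex": "F", "diag": "flu"}
projected = marginalize(row, ["age", "zip", "sex"])  # 'zip' is missing, so dropped
```

IncrementalPBM additionally partitions the window so that tuples sharing the same present-QID pattern can be anonymized together, which is where the reported re-use gains come from.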
DOI: 10.1109/SKIMA47702.2019.8982399
Citations: 1
On the Solution of Poisson’s Equation using Deep Learning
Riya Aggarwal, H. Ugail
We devise a numerical method for solving Poisson’s equation using a convolutional neural network architecture, otherwise known as deep learning. The method we employ here uses both feedforward neural systems and backpropagation to set up a framework for obtaining numerical solutions of elliptic partial differential equations, more specifically Poisson’s equation. Our deep learning framework has two substantial components. The first part of the network fulfills the necessary boundary conditions of Poisson’s equation, while the second part, a feedforward neural system containing flexible parameters or weights, gives rise to the solution. We have compared the solutions of Poisson’s equation arising from our deep learning framework, subject to various boundary conditions, with the corresponding analytic solutions. As a result, we have found that our deep learning framework can obtain solutions that are accurate as well as efficient.
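The division of labour between the two parts can be made concrete with the classic trial-solution ansatz for a 1D problem on [0, 1] with Dirichlet data u(0)=a, u(1)=b: the closed-form first term satisfies the boundary conditions for any network output N(x), so training only has to fit the interior residual of Poisson's equation. The sketch below uses a placeholder function for N; it illustrates the construction, not the paper's network:

```python
# Trial solution u(x) = a*(1-x) + b*x + x*(1-x)*N(x) on [0, 1].
# The first two terms pin down u(0)=a and u(1)=b exactly; the x*(1-x)
# factor vanishes at both ends, so the network term never disturbs them.
def trial_solution(x: float, a: float, b: float, network) -> float:
    return a * (1 - x) + b * x + x * (1 - x) * network(x)

N = lambda x: 1.7 * x  # placeholder standing in for a trained network output
```

Because the boundary conditions hold by construction, the loss minimized by backpropagation only needs to penalize how far u''(x) deviates from the source term at interior collocation points.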
DOI: 10.1109/SKIMA47702.2019.8982518
Citations: 4
A Deep Learning Approach to Tumour Identification in Fresh Frozen Tissues
H. Ugail, Maisun Alzorgani, A. M. Bukar, Humera Hussain, Christopher Burn, Thinzar Min Sein, S. Betmouni
The demand for pathology services is significantly increasing whilst the number of pathologists is significantly decreasing. To overcome these challenges, a growing interest in faster and more efficient diagnostic methods such as computer-aided diagnosis (CAD) has been observed. An increase in the use of CAD systems in clinical settings has subsequently led to a growing interest in machine learning. In this paper, we show the use of machine learning algorithms in the prediction of tumour content in Fresh Frozen (FF) histological samples of the head and neck. More specifically, we explore a pre-trained convolutional neural network (CNN), namely AlexNet, to build two common machine learning classifiers. For the first classifier, the pre-trained AlexNet network is used to extract features from its activation layer, and a Support Vector Machine (SVM) based classifier is then trained on these extracted features. In the second case, we replace the last three layers of the pre-trained AlexNet network and fine-tune these layers on the FF histological image samples. The results of our experiments are very promising: we obtained percentage classification rates in the high 90s, and our results show there is little difference between SVM and transfer learning. Thus, the present study shows that an AlexNet-driven CNN with SVM and fine-tuned classifiers is a suitable choice for accurate discrimination between tumour and non-tumour histological samples from the head and neck.
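The first pipeline (fixed CNN features feeding a separate classifier) can be sketched without heavyweight dependencies: below, hand-written "feature vectors" and a nearest-centroid rule stand in for AlexNet activations and the SVM, purely to illustrate the train-on-features / predict-on-features split. Labels and data are invented:

```python
# Stand-in sketch: in the paper, features come from a pre-trained AlexNet
# activation layer and the classifier is an SVM; here tiny fixed vectors and
# a nearest-centroid rule illustrate the same two-stage structure.
def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def predict(feature, centroids):
    """Return the label of the closest class centroid (squared L2 distance)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: d2(feature, centroids[label]))

cents = {
    "tumour": centroid([[0.9, 0.1], [1.0, 0.2]]),       # invented training features
    "non-tumour": centroid([[0.1, 0.9], [0.2, 1.0]]),
}
```

In the real system, the frozen AlexNet weights do the feature extraction once per image, so only the small downstream classifier needs training on the histology labels.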
DOI: 10.1109/SKIMA47702.2019.8982508
Citations: 2
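The first classifier described in the abstract above is a two-stage pipeline: features extracted from a pre-trained CNN's activation layer, then a linear SVM trained on those features. The sketch below illustrates only the second stage, and is not the authors' implementation: the 32-dimensional vectors are hypothetical synthetic stand-ins for AlexNet activation features, and the SVM is a minimal Pegasos-style (subgradient) linear SVM written directly rather than a library call.

```python
import numpy as np

def pegasos_svm(X, y, lam=0.01, epochs=50, seed=0):
    """Train a linear SVM (hinge loss) with the Pegasos subgradient method.

    X: (n, d) feature matrix -- here a stand-in for AlexNet
       activation-layer features; y: labels in {-1, +1}
       (tumour vs non-tumour). Bias is updated heuristically;
       plain Pegasos is usually unbiased.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)          # decaying step size
            margin = y[i] * (X[i] @ w + b)
            w *= (1.0 - eta * lam)         # regularisation shrink
            if margin < 1:                 # inside margin: hinge subgradient
                w += eta * y[i] * X[i]
                b += eta * y[i]
    return w, b

def predict(w, b, X):
    return np.where(X @ w + b >= 0, 1, -1)

# Synthetic stand-in for CNN features of tumour / non-tumour patches.
rng = np.random.default_rng(42)
X_pos = rng.normal(loc=+1.0, scale=0.6, size=(100, 32))
X_neg = rng.normal(loc=-1.0, scale=0.6, size=(100, 32))
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 100 + [-1] * 100)

w, b = pegasos_svm(X, y)
acc = (predict(w, b, X) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

In practice the feature matrix would come from forwarding tissue patches through a pre-trained AlexNet and reading out an activation layer, and a library SVM (e.g. an off-the-shelf SVC) would replace the hand-rolled trainer.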
IoT Based Remote Medical Diagnosis System Using NodeMCU
Fahim Faisal, S. A. Hossain
The Internet of Things (IoT) can improve our lives in many ways by delivering real-time information over the internet, collected from a smart network of devices. In this paper, we discuss a Remote Medical Diagnosis System (RMDS), which can aid people in many life-threatening situations. People often fall sick in locations with no hospital or healthcare facility nearby; in such cases, they sometimes die for lack of proper treatment and diagnosis. In rural areas of third-world countries, this problem is even more acute. For demonstration purposes, a person's heart rate and body temperature are measured and rendered over the internet. Health data is uploaded in real time and can be viewed through a web browser. The aim of RMDS is to provide a patient's health information remotely to a healthcare professional in life-threatening situations. It can also be used by a doctor for remote monitoring of regular patients.
DOI: 10.1109/SKIMA47702.2019.8982509
Citations: 5
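The data flow the RMDS abstract describes (sensor readings packaged on the device and uploaded for real-time viewing in a browser) can be illustrated with a small payload-builder sketch. The field names, thresholds, and alert logic below are illustrative assumptions, not taken from the paper; on an actual NodeMCU the equivalent would typically be written in Arduino C++.

```python
import json
import time

# Hypothetical safe ranges -- illustrative, not from the paper.
HEART_RATE_RANGE = (50, 120)   # beats per minute
TEMP_RANGE_C = (35.0, 38.0)    # degrees Celsius

def needs_attention(heart_rate_bpm, temp_c):
    """Flag a reading that falls outside the configured safe ranges."""
    lo_hr, hi_hr = HEART_RATE_RANGE
    lo_t, hi_t = TEMP_RANGE_C
    return not (lo_hr <= heart_rate_bpm <= hi_hr and lo_t <= temp_c <= hi_t)

def build_payload(patient_id, heart_rate_bpm, temp_c, ts=None):
    """Package one sensor reading as the JSON a NodeMCU-style client
    might POST to the monitoring server (field names are illustrative)."""
    return json.dumps({
        "patient_id": patient_id,
        "heart_rate_bpm": heart_rate_bpm,
        "temperature_c": temp_c,
        "timestamp": ts if ts is not None else int(time.time()),
        "alert": needs_attention(heart_rate_bpm, temp_c),
    })

# A feverish, tachycardic reading should carry an alert flag.
payload = build_payload("patient-007", heart_rate_bpm=135, temp_c=38.9, ts=0)
print(payload)
```

The server side would persist each payload and serve it to the web browser view the abstract mentions; an alert flag like the one above is one simple way to draw a remote clinician's attention to abnormal readings.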
Journal: 2019 13th International Conference on Software, Knowledge, Information Management and Applications (SKIMA)