In this paper, eye tracking is employed in studies to examine education and learning processes. Additionally, classrooms and labs are being equipped with this technology in order to show tomorrow's workforce how to use eye tracking in several fields. From our corporate clients, our team hears that there is a growing demand for people who are experts in all aspects of eye tracking. Universities have a unique opportunity to meet this growing demand by equipping students with eye tracking tools. This expands their future options by teaching them how visual attention can be analyzed and applied to many research fields. In order to give both the university and the students a more competitive edge in the marketplace, we can help develop curricula that demonstrate for students how eye tracking can be integrated as a tool to answer research questions, solve business problems, and even build businesses. Eye tracking can capture interesting insights about student learning behavior and teaching methods in a broad range of educational situations. The data from an eye tracker can reveal different learning strategies, helping researchers better understand student cognitive workload. This data can also be used to evaluate teacher performance and instruction methods. By understanding classroom dynamics, including the interaction between students and teachers, researchers can define appropriate training programs and instructions in order to improve education. Eye tracking shows the immediate reactions of users, as well as the distribution of their attention in an interface. Testing software and applications during development is vital to ensuring they are effective for the user.
{"title":"Intelligent Learning Environment Model for Education","authors":"Tarun Dhar Diwan, Upasana Sinha, K. Mehta","doi":"10.1109/IACC.2017.0167","DOIUrl":"https://doi.org/10.1109/IACC.2017.0167","url":null,"abstract":"Here during this Paper, Eye chase is employed in studies to look at education and learning processes. Additionally, lecture rooms and labs square measure being equipped with this technology so as to show tomorrow's work force the way to use eye chase in several fields. From our company shoppers, our team hears that there's a growing demand for those who square measure consultants all told aspects of eye chase. Universities have a singular chance to fulfill this growing demand by militarization students with eye chase tools. This expands their future choices by teaching them however visual attention may be analyzed and applied to several analysis fields. In order to relinquish each the university and also the students a a lot of competitive come on the market place, we are able to facilitate develop curriculums that demonstrate for college kids however eye chase may be integrated as a tool to answer analysis queries, solve business problems, and even build businesses. Eye chase will capture attention-grabbing insights regarding student learning behavior and teaching strategies in a very broad vary of academic things. The info from an eye fixed hunter will reveal completely different learning ways for researchers to higher perceive student psychological feature employment. This data can even be accustomed measure teacher performance and instruction strategies. By understanding the room dynamics, as well as interaction between students and academics, researchers will outline acceptable coaching programs and directions so as to boost education, Eye chase shows the immediate reactions of users, additionally because the distribution of their attention in AN interface. Testing code and applications throughout development is vital to making sure they're effective for the user.","PeriodicalId":248433,"journal":{"name":"2017 IEEE 7th International Advance Computing Conference (IACC)","volume":"99 1-4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125984857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Requirement prioritization is very useful for making decisions about the product plan, but most of the time it is ignored. In many cases the product hardly attains its principal objectives due to improper prioritization. Increased emphasis on requirement prioritization and highly dynamic requirements make the management of composite services a time-consuming and difficult task. When a software project has rigid timelines, limited resources, but high client expectations, immediate deployment of the most vital and critical features becomes mandatory. The problem can be solved by prioritizing the requirements. Over the past years, various techniques for requirement prioritization have been presented by a variety of researchers in the software engineering domain. The proposed Adaptive Fuzzy Hierarchical Cumulative Voting (AFHCV) uses an adaptive mechanism with the existing Fuzzy Hierarchical Cumulative Voting (FHCV) technique in order to increase the coverage of events that can occur at runtime. The adaptive mechanism includes addition of new requirement sets, analysis and reallocation of requirements, assignment and alteration of priorities, and re-prioritization. Re-prioritization is used to improve the results of the proposed AFHCV. The proposed system compares the results of the AFHCV technique to the existing FHCV technique, and the comparison shows that AFHCV yields better results than FHCV.
{"title":"Requirement Prioritization Using Adaptive Fuzzy Hierarchical Cumulative Voting","authors":"Bhagyashri B. Jawale, G. Patnaik, A. T. Bhole","doi":"10.1109/IACC.2017.0034","DOIUrl":"https://doi.org/10.1109/IACC.2017.0034","url":null,"abstract":"Requirement prioritization is very useful for making decisions about product plan but most of the time it is ignored. In many cases it seems that the product hardly attains its principal objectives due to improper prioritization. Increased emphasis on requirement prioritization and highly dynamic requirements makes management of composite services time consuming and difficult task. When software project has rigid timelines, limited resources, but high client expectations, an instantaneous deployment of most vital and critical features becomes mandatory. The problem can be solved by prioritizing the requirements. Over the past years, various techniques for requirement prioritization are presented by a variety of researchers in software engineering domain. The proposed Adaptive Fuzzy Hierarchical Cumulative Voting (AFHCV) uses adaptive mechanism with existing Fuzzy Hierarchical Cumulative Voting (FHCV) technique, in order to increase the coverage of events that can occur at runtime. The adaptive mechanism includes Addition of new requirement set, Analysis and Reallocation of requirements, Assignment and Alteration of priorities and Re-prioritization. The re-prioritization is used to improve the results of proposed AFHCV. The proposed system compares the results of proposed AFHCV technique to the existing FHCV technique and the comparison shows the proposed AFHCV yields better results than FHCV.","PeriodicalId":248433,"journal":{"name":"2017 IEEE 7th International Advance Computing Conference (IACC)","volume":"208 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132519517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this research, linear regression (ordinary least squares and principal component) and non-linear regression (standard and least squares support vector) models are developed for prediction of output quality from a sulphur recovery unit. The hyperparameters associated with standard SVR and LS-SVR are determined analytically using guidelines proposed in the literature. The relevant input-output data for the process variables are taken from open-source literature. The training set and validation set are statistically designed from the total data. The designed training data were used to build the process model, and the remaining validation data were used for model performance evaluation. Simulation results show superior performance of the standard SVR model over the other models.
{"title":"Inferential Sensing of Output Quality in Petroleum Refinery Using Principal Component Regression and Support Vector Regression","authors":"V. Jain, P. Kishore, R. Kumar, A. K. Pani","doi":"10.1109/IACC.2017.0101","DOIUrl":"https://doi.org/10.1109/IACC.2017.0101","url":null,"abstract":"In this research, linear regression (ordinary least square and principal component) and non-linear regression (standard and least square support vector) models are developed for prediction of output quality from sulphur recovery unit. The hyper parameters associated with standard SVR and LS-SVR are determined analytically using the guidelines proposed in the literature. The relevant input-output data for process variables are taken from open source literature. The training set and validation set are statistically designed from the total data. The designed training data were used for design of the process model and the remaining validation data were used for model performance evaluation. Simulation results show superior performance of the standard SVR model over other models.","PeriodicalId":248433,"journal":{"name":"2017 IEEE 7th International Advance Computing Conference (IACC)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130442626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spam is one of the major problems affecting the quality of Internet services, especially electronic mail. Classifying emails into spam and ham categories without any misclassification is the area of study concerned. The objective is to find the best feature set for spam email filtering. For this work, four categories of features are extracted: Bag-of-Words (BoW), bigram BoW, PoS tags, and bigram PoS tags. Rare features are eliminated based on their Naive Bayes score. We chose Information Gain as the feature selection technique and constructed a feature occurrence matrix weighted by term frequency-inverse document frequency (TF-IDF) values. Singular Value Decomposition is used as the matrix factorization technique. AdaBoostJ48, Random Forest, and the popular linear Support Vector Machine (SVM) trained with Sequential Minimal Optimization (SMO) are used as classifiers for model generation. The experiments are carried out on individual feature models as well as ensemble models. A high ROC of 1 and a low FPR of 0 were obtained for both the individual feature models and the ensemble models.
{"title":"Efficient Feature Set for Spam Email Filtering","authors":"Reshma Varghese, K. Dhanya","doi":"10.1109/IACC.2017.0152","DOIUrl":"https://doi.org/10.1109/IACC.2017.0152","url":null,"abstract":"Spams are one of the major problems for the quality of Internet services, specially in the electronic mail. Classifying emails into spam and ham category without any misclassification is the concerned area of study. The objective is to find the best feature set for spam email filtering. For this work to be carried out, four categories of features are extracted. That are Bag-of-Word (BoW)s, Bigram Bag-of-Word (BoW)s, PoS Tag and Bigram PoS Tag. Rare features are eliminated based on Naive Bayes score. We chose Information Gain as feature selection technique and constructed Feature occurrence matrix, which is weighted by Term frequency-Inverse document frequency (TF-IDF) values. Singular Value Decomposition used as matrix factorization technique. AdaBoostJ48, Random Forest and Popular linear Support Vector Machine (SVM), called Sequential Minimal Optimization (SMO) are used as classifiers for model generation. The experiments are carried out on individual feature models as well as ensemble models. High ROC of 1 and low FPR of 0 were obtained for both individual feature model and ensemble model.","PeriodicalId":248433,"journal":{"name":"2017 IEEE 7th International Advance Computing Conference (IACC)","volume":"235 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121576631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A very huge quantity of data, called "Big Data", is continuously generated from a variety of different sources such as IT industries, internet applications, hospital history records, and social media feeds. Most data mining algorithms find interesting patterns in transactional databases where the information is precise; it is not so easy to discover interesting patterns in big data. To avoid wasting a great deal of space and time in searching for frequent items in uncertain big data, the proposed approach permits users to express their interest in terms of a succinct anti-monotone constraint. The MapReduce technique is used to mine frequent patterns: two sets of map and reduce functions are used by the proposed system to mine valid singleton and non-singleton patterns. In the proposed work, the UF-tree algorithm generates a tree structure of the dataset and UF-growth mines frequent itemsets recursively. To further reduce the search space and execution time on uncertain big data, the proposed work gives importance to the frequency of items using weighting factors and calculates the expected support of an item on the basis of its weight. This reduces the number of nodes in the first level of the tree, which leads to a reduction in the size of the tree and in execution time.
{"title":"Existential Probability Weighting Strategy to Reduce Search Space & Time for Big Data Mining","authors":"Mahesh Shinde, K. Adhiya","doi":"10.1109/IACC.2017.0035","DOIUrl":"https://doi.org/10.1109/IACC.2017.0035","url":null,"abstract":"Very huge quantity of data is continuously generated from a variety of different sources such as IT industries, internet applications, hospital history records, social media feeds etc. called as \"Big Data\". Mostly Data mining algorithms find the interesting patterns of data from the value-based database where the information is exact. It is not so easy to discover interesting patterns from big data. To abstain from squandering a ton of space & time in searching down frequent item uncertain big data, proposed approach permits clients to show their enthusiasm for terms of succinct anti-monotone constraint. MapReduce technique is used to mine frequent patterns. Two sets of map and reduce functions are used by proposed system to mine valid singleton and non-singleton patterns. In proposed work, UF-tree algorithm generates tree structure of dataset and UF-growth mines frequent itemsets recursively. To further reduce the search space and execution time in uncertain big data, proposed work gives importance to the frequency of items using weighting factors, and calculate expected support of item on the basis of weight. It reduces the nodes in the first level of tree, which leads to a reduction in the size of the tree and execution time.","PeriodicalId":248433,"journal":{"name":"2017 IEEE 7th International Advance Computing Conference (IACC)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126584207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
It is well known that, besides QoS, energy efficiency is also a key consideration in designing and evaluating broadband wireless mobile communications. First, the energy efficiency of MIMO-OFDM technologies in mobile transmission systems is analyzed statistically. To address quality-of-service issues, the channel matrix is decomposed by SVD and the subchannels are ordered by their channel characteristics. Moreover, the multi-channel joint optimization problem in conventional MIMO-OFDM communication systems is transformed into a multi-objective single-channel optimization problem by grouping all subchannels. In this manner, a closed-form solution of the energy-efficiency optimization is derived for MIMO-OFDM mobile multimedia communication systems. As an outcome, an energy-efficiency-optimized power allocation (EEOPA) algorithm is proposed to improve the energy efficiency of MIMO-OFDM mobile transmission communication systems. Simulation comparisons verify that the proposed EEOPA algorithm can guarantee the required QoS with high energy efficiency in MIMO-OFDM mobile transmission communication systems.
{"title":"A Novel Method for Efficient Optimization of Energy for LTE with QoS Constraints","authors":"Owk. Srinivasulu, P. R. Kumar","doi":"10.1109/IACC.2017.0067","DOIUrl":"https://doi.org/10.1109/IACC.2017.0067","url":null,"abstract":"It is well known that besides the QoS, the potency for energy is additionally a new key technology in coming up for estimating broadband wireless mobile communications. The associated energy potency method is analyzed first for MIMO-OFDM technologies in the transmission system of mobile for applied mathematics. Nature of administration issues utilizing the channel framework SVD system for subchannels unit arranged by their attributes of the channels. Besides, the multi-direct joint change drawback in ordinary MIMO-OFDM correspondence frameworks is redesigned into a multi-target single channel change drawback by gathering all sub channels. In this manner, a shut frame arrangement of the vitality intensity change springs for MIMO-OFDM versatile Multimedia correspondence frameworks. As an outcome, relate vitality effectiveness improved power allotment (EEOPA) algorithmic administer is anticipated to upgrade the vitality strength of MIMO-OFDM versatile transmission correspondence frameworks. Recreation examinations accept that the anticipated EEOPA algorithmic lead will ensure the predefined QoS with high vitality strength in MIMO-OFDM portable transmission correspondence frameworks.","PeriodicalId":248433,"journal":{"name":"2017 IEEE 7th International Advance Computing Conference (IACC)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122460017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The implementation of MIMO with OFDM is an effective and attractive technique for high-data-rate transmission and provides strong reliability in wireless communication. It has many advantages: it can decrease receiver complexity, provides robustness against narrowband interference, and has the capability to reduce multipath fading. The major problem of MIMO-OFDM is its high PAPR, which leads to a reduction in the signal-to-quantization-noise ratio of the converters and also degrades the efficiency of the power amplifier at the transmitter. In this paper we mainly focus on one scrambling and one non-scrambling technique, iterative clipping and filtering and partial transmit sequence (PTS), which together result in better performance. Once the two techniques are united or combined in the system, they prove that along with trimming down the PAPR value, the power spectral density also gets smoother.
{"title":"Improvisation in BER and PAPR by Using Hybrid Reduction Techniques in MIMO-OFDM Employing Channel Estimation Techniques","authors":"Ashna Kakkar, S. N. Garsha, O. Jain, Kritika","doi":"10.1109/IACC.2017.0047","DOIUrl":"https://doi.org/10.1109/IACC.2017.0047","url":null,"abstract":"The implementation of MIMO with OFDM is an effective and more attractive technique for high data rate transmission and provides burly reliability in wireless communication. It has lot of advantages which can decrease receiver complexity, provides heftiness against narrowband interference and have capability to reduce multipath fading. The major problem of MIMO-OFDM is high PAPR which leads to reduction in Signal to Quantization Noise Ratio of the converters which also degrades the efficiency of power amplifier at transmitter. In this paper we mainly focus on one of scrambling and non scrambling technique Iterative clipping and filtering, and partial Transmit sequence (PTS) which results in better performance. The two techniques once united or combined in the system prove that along with trimming down the PAPR value, the power spectral density also gets smoother.","PeriodicalId":248433,"journal":{"name":"2017 IEEE 7th International Advance Computing Conference (IACC)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116503124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Wireless sensor networks are a topic of much research and interest because they can be widely applied in different areas such as surveillance, environmental monitoring, etc. For all these applications, localization is the most basic issue. Localization algorithms or techniques can be categorized into two classes: range-based and range-free. The range-free scheme uses the connectivity information between nodes. In the range-free scheme, nodes that are aware of their location are called anchors, while those that are not are called normal nodes. Anchors are fixed nodes, while normal nodes are generally mobile. To estimate their positions, the normal nodes initially gather the positions of the anchors as well as their connectivity, and then calculate their own positions. Compared with range-based techniques, range-free techniques are more cost-effective, since no additional devices are required. As a result, this paper focuses on the study of two range-free localization techniques, namely Approximate Point In Triangulation (APIT) and DV-Hop. Thus, we investigate the application of these two techniques in wireless sensor networks under varying conditions and parameters.
{"title":"Comparative Analysis of Approximate Point in Triangulation (APIT) and DV-HOP Algorithms for Solving Localization Problem in Wireless Sensor Networks","authors":"Samuel Anthrayose, A. Payal","doi":"10.1109/IACC.2017.0085","DOIUrl":"https://doi.org/10.1109/IACC.2017.0085","url":null,"abstract":"Wireless sensor networks is a topic of much research and interest, because they can be widely applied in different areas such as surveillance, environmental monitoring, etc. For all these applications, localization is the most basic issue. The localization algorithms or techniques can be categorized into two: range-based and range-free. The range-free scheme uses the connectivity information between nodes. In the range-free scheme, nodes that are aware of their location are called anchors, while others that are not are called normal nodes. Anchors are fixed nodes, while the normal nodes are generally mobile. For estimation of their positions, the normal nodes initially gather the information of the positions of anchors as well as their connection, and then calculates its own positions. When compared with range-based techniques, the range-free techniques are more cost-effective, since no additional devices are required. As a result, this paper focuses on the study of two range-free localisation techniques namely, Approximate Point in Triangulation (APIT) and DV-Hop. Thus, we are interested in the investigation of wireless sensor networks on application of these two techniques under varying conditions and parameters.","PeriodicalId":248433,"journal":{"name":"2017 IEEE 7th International Advance Computing Conference (IACC)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133470624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Privacy preservation of personal data is achieved by many techniques such as k-anonymity, l-diversity, t-closeness, etc., but the proposed techniques are implemented only when a laptop or a computer is available. Nowadays people prefer to carry a mobile device instead of a laptop, because much of the work done on a laptop can also be done on a mobile device, such as sharing files, images, and videos, and much more; but this sharing may compromise the privacy of the data. In order to protect the data, the k-anonymity technique is used, which chooses the k-value such that the data of at least k-1 other individuals also appears in the release. This technique is implemented using the Android SDK. Whenever a user requests the information, instead of sending the original information, the data is sent in an anonymized form. This paper presents the implementation of this technique, and results are shown.
{"title":"Implementation of K-Anonymity Using Android SDK","authors":"M. Sheshikala, R. Prakash, D. Rao","doi":"10.1109/IACC.2017.0177","DOIUrl":"https://doi.org/10.1109/IACC.2017.0177","url":null,"abstract":"Privacy preserving of personal data is done by many techniques like k-anonymity, l-diversity, t-closeness etc., but the techniques proposed are implemented only when there is the availability of a laptop or a computer. Now-a-days people are interested to carry a mobile instead of a lab-top, because some of the works done by a lap-top can also be done by a mobile, like files sharing, images and videos sharing, and many more, but the sharing may lose the privacy of the data. With a specific end goal to give the protection to the information the strategy k-anonymity is utilized, which chooses the k-esteem where at any rate k-1 people whose data additionally shows up in the discharge. This technique is implemented using Android SDK. Whenever the user requests the information, instead of sending original information, the data is sent in an anonymized way. This paper presents the implementation of this technique and results are shown.","PeriodicalId":248433,"journal":{"name":"2017 IEEE 7th International Advance Computing Conference (IACC)","volume":"121 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134569734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Matrix multiplication is an operation used in many algorithms, with a plethora of applications ranging from image processing and signal processing to artificial neural networks and linear algebra. This work aims to showcase the effect of developing matrix multiplication strategies that are less time- and processor-intensive by effectively handling memory accesses. The paper also touches upon the advantages of using OpenMP, a multiprocessing toolkit, to show the effect of parallelizing matrix multiplication.
{"title":"Cache Friendly Strategies to Optimize Matrix Multiplication","authors":"M. Ananth, S. Vishwas, M. R. Anala","doi":"10.1109/IACC.2017.0020","DOIUrl":"https://doi.org/10.1109/IACC.2017.0020","url":null,"abstract":"Matrix multiplication is an operation used in many algorithms with a plethora of applications ranging from Image Processing, Signal Processing, to Artificial Neural Networks and Linear algebra. This work aims to showcase the effect of developing matrix multiplication strategies that are less time and processor intensive by effectively handling memory accesses. The paper also touches upon on the advantages of using OpenMP, a multiprocessing toolkit to show the effect of parallelizing matrix multiplication.","PeriodicalId":248433,"journal":{"name":"2017 IEEE 7th International Advance Computing Conference (IACC)","volume":"588 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117071229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}