This paper proposes a weed detection mechanism in which carrot leaves are segmented from weeds (mostly chamomile). In the early growth stage, weeds and carrot leaves are intermixed and have similar color and texture, which makes them difficult to identify without the help of domain experts. It is therefore essential to remove the weed regions so that the carrot plants can grow without interruption. Identifying the weeds becomes even more challenging when plant and weed regions overlap (inter-leaves). The proposed method takes this problem into account and breaks the identification mechanism down into three major components: Image Segmentation, Feature Extraction, and Classification. In the Image Segmentation stage, K-Means clustering is applied to select the images used for identification. In the Feature Extraction stage, structural information about the weeds and leaves is extracted from the lower-unit images; to capture the information in each Region of Interest (ROI), a Histogram of Oriented Gradients (HoG) descriptor is used to locate and label the weed and carrot-leaf regions. In the Classification stage, a Support Vector Machine (SVM) analyzes this information and labels the regions. This method of weed detection is effective because it automates the identification process and reduces herbicide use, which in turn benefits the environment. The proposed method classifies the plant regions with a success rate of 92% on an open dataset, outperforming several previous approaches.
{"title":"Development of Inter-Leaves Weed and Plant Regions Identification Algorithm using Histogram of Oriented Gradient and K-Means Clustering","authors":"Dheeman Saha, George Hamer, Ji Young Lee","doi":"10.1145/3129676.3129700","DOIUrl":"https://doi.org/10.1145/3129676.3129700","url":null,"abstract":"This paper proposes a weed detection mechanism, where the carrot leaves are segmented from the weeds (mostly Chamomile). In the early stage, both weeds and carrot leaves are intermixed with each other and have similar color texture. This makes it difficult to identify without the help of the domain experts. Therefore, it is essential to remove the weed regions so that the carrot plants can grow without any interruptions. The process of identifying the weeds become more challenging when both plant and weed regions overlap (inter-leaves). The proposed method takes account of this problem and breaks down the identification mechanism into three major components: Image Segmentation, Feature Extraction, and Classification. In the Image Segmentation stage, K-Means clustering is applied to select the images that will be used for the identification purpose. Next, in the Feature Extraction stage structural information of the weed and leaves will be extracted from the lower unit images. Furthermore, to extract the information from the Region of Interest (ROI), Histogram of Oriented Gradient (HoG) is used to locate and label all the weed and carrot leaves regions. In the Classification stage, Support Vector Machine (SVM) analyzes all the information and labels the regions. This method of weed detection is effective as it automates the identification process and fewer herbicides will be used, which in-turn benefits the environment. The proposed method successfully classifies the plant regions at a success rate of 92% using an open dataset and outperformed some of the previous approaches.","PeriodicalId":326100,"journal":{"name":"Proceedings of the International Conference on Research in Adaptive and Convergent Systems","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127339492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper aims to resolve the problem of increased data dimensionality in datasets using a modified Non-negative Matrix Factorization (NMF). The difficulty arising from the non-orthogonality of the NMF basis is resolved using Cholesky decomposition (cd-NMF). cd-NMF extracts the feature vectors from the dataset, and each data vector is linearly mapped through the upper triangular matrix obtained from the Cholesky decomposition. The experiment is validated in terms of accuracy and normalized mutual information against three text databases with varied patterns. The results show that the proposed technique handles large instances better when retrieving documents for a query than NMF, NPNMF, MM-NMF, RNMF, GNMF, and HNMF.
{"title":"Reducing Dimensionality Using NMF Based Cholesky Decomposition","authors":"Jasem M. Alostad","doi":"10.1145/3129676.3129697","DOIUrl":"https://doi.org/10.1145/3129676.3129697","url":null,"abstract":"This paper aims to resolve the problem associated with increased data dimensionality in datasets using modified Non-integer Matrix Factorization (NMF). Further, the increased dimensionality arising due to non-orthogonally from NMF is resolved using Cholesky decomposition (cd-NMF). The cd-NMF is used to extract the feature vector from the dataset and the data vector is linearly mapped from upper triangular matrix obtained from the Cholesky decomposition. The experiment is validated in terms of accuracy and normalized mutual information metrics again three different text databases with varied patterns. Further, the results proves that the proposed technique fits well with larger instances in finding the documents as per the query, than NMF, NPNMF, MM-NMF, RNMF, GNMF, HNMF and cd-NMF.","PeriodicalId":326100,"journal":{"name":"Proceedings of the International Conference on Research in Adaptive and Convergent Systems","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126876169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the emergence of increasingly heterogeneous devices and networks, computing systems are required to support a variety of services with different quality-of-service requirements. This degree of heterogeneity makes it more difficult to allocate resources fairly based on each client's weight, and as systems grow larger, their performance can worsen significantly. In this paper, we present a fair scheduling algorithm for multiprocessor systems based on a task satisfaction index. The proposed algorithm, called LZF, aims to achieve a high level of proportional fairness for heterogeneous tasks. The evaluation results show that its service time error is bounded between -1 and 1, and that LZF achieves the best proportional fairness among existing scheduling algorithms with respect to average service time error.
{"title":"A Fair Scheduling Algorithm for Multiprocessor Systems Using a Task Satisfaction Index","authors":"Jinmang Jung, Jongho Shin, Jiman Hong, Jinwoo Lee, Tei-Wei Kuo","doi":"10.1145/3129676.3129736","DOIUrl":"https://doi.org/10.1145/3129676.3129736","url":null,"abstract":"With the emergence of increasingly heterogeneous devices and networks, computing systems are required to support a variety of services with different quality of service requirements. The degree of heterogeneity makes it more difficult to fairly allocate resources based on the client's weight. Moreover, as the systems become larger, their performance can worsen significantly. In this paper, we present a fair scheduling algorithm for multiprocessor systems using a task satisfaction index. The proposed algorithm, called LZF, aims to achieve a high level of proportional fairness for the heterogeneous tasks. The evaluation results show that its service time error is bounded between -1 and 1, and the LZF achieves the best proportional fairness among existing scheduling algorithms with respect to the average service time error.","PeriodicalId":326100,"journal":{"name":"Proceedings of the International Conference on Research in Adaptive and Convergent Systems","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132962496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software Quality Attributes (QAs) can be categorised as either internal to the system, as experienced by developers, or external to the system, as perceived by end users. There are trade-offs between these QA categories: an emphasis on an internal QA may compromise an external QA, as with the trade-off between maintainability and performance. Model-driven development approaches manage this trade-off and increase the internal QA of maintainability. In this work, we propose an ontology-based communication mechanism among software components to handle the trade-off, as sketched below. The approach increases internal QAs such as modifiability, maintainability, and testability during the design and development phases without compromising external QAs for end users during the operation phase. We evaluate a prototype system to validate the proposed approach using the Software Architecture Analysis Method (SAAM). The approach is also easier to integrate into the software development life cycle than existing model-driven approaches.
{"title":"Internal Quality to External Quality: an Approach to Manage Conflicts","authors":"S. Kalra, T. Prabhakar","doi":"10.1145/3129676.3129714","DOIUrl":"https://doi.org/10.1145/3129676.3129714","url":null,"abstract":"Software Quality Attributes (QAs) can be categorised as either internal to the system as experienced by the developers or external to the system perceived by the end users. These QA categories have trade-off among them - an emphasis on internal QA may result in a compromise of an external QA. For example, there is a trade-off between maintainability and performance. Model-driven development approaches manage this trade-off and increase the degree of internal QA maintainability. In this work, we propose an ontology-based communication mechanism among software components to handle the trade-off. The approach increases the degree of internal QAs such as modifiability, maintainability, testability during the design and development phases without compromising the external QAs for the end users during the operation phase. We also evaluate a prototype system to validate the proposed approach using Software Architecture Analysis Method (SAAM). It is also easier to integrate into the software development life cycle as compared to existing model-driven approaches.","PeriodicalId":326100,"journal":{"name":"Proceedings of the International Conference on Research in Adaptive and Convergent Systems","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133581735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As changes in energy supply become more important, interest in efficient energy management is increasing. In this paper, demand forecasting is performed through time-series analysis of power big data, and performance is measured by comparing the predictions of several time-series forecasting models.
{"title":"A Study on Prediction Comparison by Time Series Analysis Model of Load Big data","authors":"Jaehyung Kim, Taehyoung Kim, K. Ham","doi":"10.1145/3129676.3129719","DOIUrl":"https://doi.org/10.1145/3129676.3129719","url":null,"abstract":"As energy supply changes become more important, interest in the field of efficient energy management is increasing. In this paper, demand forecasting is performed through time series analysis of power big data. And we measured the performance through predictive comparison of time series prediction model.","PeriodicalId":326100,"journal":{"name":"Proceedings of the International Conference on Research in Adaptive and Convergent Systems","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124148946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Long short-term memory (LSTM) is widely used for processing time-sequence data such as language and human skeletal data, and its importance is continuously increasing. In particular, recent studies have shown that higher performance can be obtained by using deep LSTM instead of single LSTM for language processing and action recognition tasks. In this paper, we compare the performance of single LSTM and deep LSTM on a different time-sequence processing task: single object tracking. We verify that deep LSTM significantly improves performance over single LSTM, which implies that deep LSTM is an effective model for overcoming current technical limitations such as object deformation and occlusion. We expect this study to lead to the development of a stable tracker robust to object deformation and occlusion in the near future.
{"title":"Comparison of single and deep long short-term memory for single object tracking","authors":"KangUn Jo, Jung-Hui Im, Dae-Shik Kim","doi":"10.1145/3129676.3129681","DOIUrl":"https://doi.org/10.1145/3129676.3129681","url":null,"abstract":"Long short-term memory (LSTM) is widely used for processing time sequence data like language and human skeletal data, and its importance is continuously increasing. In particular, recent studies have shown that higher performance can be obtained by using deep LSTM instead of single LSTM for language processing and action recognition tasks. In this paper, we compared the performance between single LSTM and deep LSTM for a different time sequence processing task, single object tracking. We verified that using deep LSTM can significantly improve the performance compared to single LSTM. This implies that deep LSTM is an effective model to overcome current technical limitations such as object deformation and occlusion. We expect this study will lead to the development of a stable tracker robust to object deformation and occlusion in the near future.","PeriodicalId":326100,"journal":{"name":"Proceedings of the International Conference on Research in Adaptive and Convergent Systems","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124167338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hadoop and Spark are well-known big data processing platforms. The main technologies of Hadoop are the Hadoop Distributed File System (HDFS) and MapReduce processing. Hadoop stores intermediary data on HDFS, a disk-based distributed file system, while Spark stores intermediary data in the memory of distributed computing nodes as Resilient Distributed Datasets. In this paper, we show how memory size affects distributed processing of large volumes of data by comparing the running time of the K-means algorithm from the HiBench benchmark on Hadoop and Spark clusters with different amounts of memory allocated to the data nodes. Our results show that the Spark cluster is faster than the Hadoop cluster as long as the memory is large enough for the data size. However, as the data size grows, the Hadoop cluster outperforms the Spark cluster: when the data is bigger than the memory cache, Spark must repeatedly swap cached data in and out against the disk, which degrades performance.
{"title":"Impact of Memory Size on Bigdata Processing based on Hadoop and Spark","authors":"Seunghye Han, Wonseok Choi, Rayan Muwafiq, Yunmook Nah","doi":"10.1145/3129676.3129688","DOIUrl":"https://doi.org/10.1145/3129676.3129688","url":null,"abstract":"Hadoop and Spark are well-known big data processing platforms. The main technologies of Hadoop are Hadoop Distributed File System and MapReduce processing. Hadoop stores intermediary data on Hadoop Distributed File System, which is a disk-based distributed file system, while Spark stores intermediary data in the memories of distributed computing nodes as Resilient Distributed Dataset. In this paper, we show how memory size affects distributed processing of large volume of data, by comparing the running time of K-means algorithm of HiBench benchmark on Hadoop and Spark clusters, with different size of memories allocated to data nodes. Our results show that Spark cluster is faster than Hadoop cluster as long as the memory size is big enough for the data size. But, with the increase of the data size, Hadoop cluster outperforms Spark cluster. When data size is bigger than memory cache, Spark has to replace disk data with memory cached data, and this situation causes performance degradation.","PeriodicalId":326100,"journal":{"name":"Proceedings of the International Conference on Research in Adaptive and Convergent Systems","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124441728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The amount of malicious code targeting the Android platform is increasing day by day. The biggest difficulty in analyzing malicious code is the large amount of source code that must be analyzed: the larger the source code, the longer the analysis time, and the longer the analysis time, the less accurate the result. Android application programmers tend to use many 3rd-party libraries, which increases the size of the source code. Using 3rd-party libraries has the advantage of letting programmers develop applications easily, but the disadvantage of including code that is unnecessary to analyze. To analyze an Android application efficiently, it is better to exclude well-known benign code, called a whitelist, from the original source code. In this paper, we present a whitelist for Android applications. The whitelist contains feature information from 3rd-party libraries known to be benign, and it can be used to reduce the amount of source code a malware analyst must examine when analyzing malicious code in Android applications. Experiments show that the number of methods to analyze is greatly reduced when the whitelist database is used, and the analysis time is shortened accordingly.
{"title":"Whitelist for Analyzing Android Malware","authors":"Kyoungmin Kim, Jeonghwan Lee, Seonguk Lee, Jiman Hong","doi":"10.1145/3129676.3129726","DOIUrl":"https://doi.org/10.1145/3129676.3129726","url":null,"abstract":"The number of malicious code targeting the Android platform is increasing day by day. The biggest difficulty in analyzing the malicious code is the large amount of source code that needs to be analyzed. The larger the size of the source code, the longer the analyzing time and the longer the analyzing time, the less accurate the result of the analysis. Generally, the Android application programmers tend to use a lot of 3rd party libraries and it causes the size of the source code to increase. The use of 3rd-party library has the advantage of allowing programmers to easily develop applications, but it has the disadvantage of including unnecessary codes in the source code. For analyzing a Android application efficiently it would be better exclude well known normal code, which is called, white list from the original source code. In this paper, we present the Whitelist for Android applications. The Whitelist contains feature information from the 3rd-party library known as normal. It can be used for reducing the amount of source code to by analyzed when a Malware Analyst analyze the malicious codes in Android applications. Experiments show that the number of methods to analyze when using malicious code using Whitelist Database is greatly reduced and analysis time can be shortened.","PeriodicalId":326100,"journal":{"name":"Proceedings of the International Conference on Research in Adaptive and Convergent Systems","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116963856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A microwave (MW) focusing technique for non-invasive thermal therapy is presented. The proposed technique provides thermal focusing at localized tissues, such as cancerous tissue and deep tissue, for treating muscular disorders. We employ a time-reversal technique for the targeted MW focusing and develop computational solvers for electromagnetic and thermal analysis. To verify the proposed technique, we apply it to an anatomical electromagnetic model based on magnetic resonance images. This work is a computational study that predicts the performance of the MW focusing system we plan to build in future work.
{"title":"MR images-Based Microwave Focusing for Thermal Therapy","authors":"Kwang-Jae Lee, Jang‐Yeol Kim, Seong‐Ho Son, Seok-Jae Kang","doi":"10.1145/3129676.3129728","DOIUrl":"https://doi.org/10.1145/3129676.3129728","url":null,"abstract":"A microwave (MW) focusing technique for non-invasive thermal therapy is presented. The proposed technique provides a thermal focusing at localized tissues as a cancer tissue and a deep tissue for muscular disorder treatment. We employed a time reversal technique for the targeted MW focusing and computational solvers for electromagnetic and thermal analysis are developed. To verify the proposed technique, we applied to an anatomically electromagnetic model based on magnetic resonance images. This work is a computational study to predict performances of the MW focusing system in our future work.","PeriodicalId":326100,"journal":{"name":"Proceedings of the International Conference on Research in Adaptive and Convergent Systems","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116487064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Intrusion detection is a major issue in network security. Signature-based intrusion detection systems use detection rules to detect intrusions. However, writing intrusion detection rules is difficult and requires considerable knowledge of various fields, and attackers can modify previous attacks to evade the existing rules. In this paper, we address the problem of detecting "modified" attacks using the original intrusion detection rules. We present a simple method, based on the notion of q-grams and longest increasing subsequences, for reporting substrings of the network stream that approximately match at least one intrusion detection rule. Experimental results show that our approach can detect modified attacks, modeled as strings that match the intrusion detection rules after edit operations.
{"title":"Mining intrusion detection rules with longest increasing subsequences of q-grams","authors":"Inbok Lee, Sung-il Oh","doi":"10.1145/3129676.3129724","DOIUrl":"https://doi.org/10.1145/3129676.3129724","url":null,"abstract":"Intrusion detection has been a major issue in network security. Signature-based intrusion systems use intrusion detection rules for detecting intrusion. However, writing intrusion detection rules is difficult and requires a considerable knowledge on various fields. Also attackers can modify previous attacks to escape intrusion detection rules. In this paper we deal with the problem of detecting \"modified\" attacks using original intrusion detection rules. We show a simple method of reporting substrings in the network stream which have approximate matches with at least one of the network intrusion detection rules, based on the notion of q-grams and the longest increasing subsequences. Experimental results showed that our approach can detect modified attacks, which are modeled as strings which can match the intrusion detection rules after edit operations.","PeriodicalId":326100,"journal":{"name":"Proceedings of the International Conference on Research in Adaptive and Convergent Systems","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127308504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}