Gait is a pattern of body movement that can serve as a biometric for human identification. Unlike other biometrics such as fingerprint, iris, face, and voice recognition, human gait can be captured unobtrusively. In this paper, several measurements are proposed that use body-frame information in 3D space. Body-frame data are generated from depth images captured with a Kinect camera, and the generated body frames are used for human gait analysis. The angles of the lower body parts are measured over a gait cycle. In addition, the lengths of body parts are measured as features to combine with the angle measurements. The measurements are compared across five subjects with similar body types. The differences among the subjects' measurements indicate that human gait is a potential pattern for human identification.
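As a rough illustration of the angle measurement this abstract describes (a sketch, not the authors' code; the joint names and coordinates are made up), a lower-body joint angle can be computed from three 3D skeleton points returned by a depth sensor:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by 3D points a-b-c,
    e.g. hip-knee-ankle for the knee flexion angle."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp rounding error
    return math.degrees(math.acos(cos_t))

# A straight leg: hip, knee, and ankle collinear -> 180 degrees.
hip, knee, ankle = (0.0, 1.0, 0.0), (0.0, 0.5, 0.0), (0.0, 0.0, 0.0)
print(joint_angle(hip, knee, ankle))  # 180.0
```

Tracking such angles frame by frame over a gait cycle yields the kind of per-subject angle curves the paper compares.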
Wonjin Kim and Yanggon Kim. "Gait Recognition for Human Identification using Kinect." Proceedings of the International Conference on Research in Adaptive and Convergent Systems, 2017. doi:10.1145/3129676.3129715
M. Taheri, Dheeman Saha, Gary Hatfield, E. Byamukama, Sung Y. Shin
This paper proposes a Decision Management System that identifies white mold regions in soybean fields using an Autologistic Statistical Model (ASM) and Remote Sensing (RS) data analysis, with commercially available big data sets as input. Developing such an identification model requires many types of data; in this study, the inputs are satellite image pixel values and field data such as precipitation, yield, elevation, humidity, wind speed, wind direction, and geospatial location. The model uses this information as input parameters and provides an overall estimate of the white mold regions in the soybean fields. Based on the evaluation of the results, the accuracy of the proposed method is 84%, a promising result given that each satellite image pixel covers 30 by 30 meters.
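For intuition on the autologistic model the abstract names (a minimal sketch, not the paper's fitted model; the coefficients and neighborhood here are invented), the key idea is an ordinary logistic regression on covariates plus an autocorrelation term over the labels of neighboring field cells:

```python
import math

def autologistic_prob(x, beta, neighbor_labels, rho):
    """P(y_i = 1) under an autologistic model: a linear predictor over
    covariates x (weights beta) plus rho times the sum of neighboring
    binary labels, passed through the logistic function."""
    eta = sum(b * v for b, v in zip(beta, x)) + rho * sum(neighbor_labels)
    return 1.0 / (1.0 + math.exp(-eta))

# With zero covariate effect and no infected neighbors, P = 0.5;
# infected neighbors push the probability up via the rho term.
print(autologistic_prob([0.0], [1.0], [0, 0, 0], 0.8))  # 0.5
```

The neighbor term is what lets spatially clustered diseases like white mold be modeled better than by pixel-independent logistic regression.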
M. Taheri, Dheeman Saha, Gary Hatfield, E. Byamukama, and Sung Y. Shin. "Applied Statistical Model and Remote Sensing for Decision Management System for Soybean." Proceedings of the International Conference on Research in Adaptive and Convergent Systems, 2017. doi:10.1145/3129676.3129710
Imprecise computation can enhance the responsiveness of computing systems but is rarely applied to embedded real-time systems because of its dynamic computation requirements. With the increasing use of image processing in multi-mode safety-critical systems, applying imprecise computation to such systems has become desirable. In this paper, we extend traditional schedulability analysis of mode changes to preemptive multi-core systems with imprecise computation, so as to guarantee that no deadline is missed during a mode change. The proposed method computes a mode-change offset that delays the new mode up to a limit while ensuring no deadline misses. To evaluate the schedulability of the proposed algorithms, we compare our approach with other work by simulation. The results show that our analysis increases schedulability by 15% to 30% compared to the approach proposed by Lee and Shin in 2013. Moreover, the proposed approach increases the number of tasks completed during a mode change by up to 40%.
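For context on what a schedulability test checks (a deliberately crude sketch; the paper's multi-core mode-change analysis is far finer-grained than this), a necessary condition is that neither the outgoing nor the incoming mode's total utilization exceeds the platform capacity:

```python
def total_utilization(tasks):
    """Sum of WCET/period over (wcet, period) task pairs."""
    return sum(wcet / period for wcet, period in tasks)

def capacity_test(old_mode, new_mode, cores):
    """Necessary (not sufficient) condition across a mode change:
    neither mode's total utilization may exceed the number of cores.
    This only rules out trivially infeasible task sets; a real analysis
    must also bound the transient overload while both modes overlap."""
    return (total_utilization(old_mode) <= cores
            and total_utilization(new_mode) <= cores)

# Two cores; two hypothetical modes given as (wcet, period) pairs.
print(capacity_test([(1, 4), (2, 8)], [(2, 4), (3, 6)], 2))  # True
```

The mode-change offset the paper computes addresses exactly the gap this check ignores: the window where old-mode jobs are still completing when new-mode jobs arrive.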
C. Shih and Chang-Min Yang. "Schedulability Analysis of Mode Change for Imprecise Computation on Multi-Core Platforms." Proceedings of the International Conference on Research in Adaptive and Convergent Systems, 2017. doi:10.1145/3129676.3129720
Hibernation-based Linux fast booting typically calls the kernel function "shrink_all_memory", which is not exported to kernel modules. In this study, fast booting is designed as a kernel module, so the same module can be used with different versions of the Linux kernel.
Shiwu Lo and Yueyuan Zhang. "Implementing Hibernation-based Fast Booting as a Device Driver." Proceedings of the International Conference on Research in Adaptive and Convergent Systems, 2017. doi:10.1145/3129676.3129691
Image denoising is crucial for improving visual image quality and for facilitating image analysis and processing. Image noise appears in many imaging applications, such as remote sensing surveillance and computer-assisted medical surgery. Noise is often introduced during image acquisition when the sensor is subject to interference. Hence, denoising techniques are commonly used to restore the original signal through estimation and approximation. Recently, a sparse coding technique employing dictionary learning has been used for image denoising. In this study, we compare a recently proposed denoising method, the Moran's I Vector Median Filter (MIVMF), with the sparse coding method and a traditional scalar median filter on impulse noise. In these preliminary results, sparse coding does not perform as well as expected; instead, the MIVMF gives the best denoising results.
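To make the scalar median filter baseline concrete (a 1-D sketch for brevity; the paper works on 2-D images, and MIVMF itself is more involved), each sample is replaced with the median of its neighborhood, which removes isolated impulse spikes without blurring as a mean would:

```python
def median_filter_1d(signal, window=3):
    """Scalar median filter: replace each sample with the median of its
    window; effective against impulse (salt-and-pepper) noise."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        w = sorted(signal[lo:hi])
        out.append(w[len(w) // 2])
    return out

noisy = [10, 10, 255, 10, 10, 0, 10, 10]  # impulse spikes at 255 and 0
print(median_filter_1d(noisy))  # [10, 10, 10, 10, 10, 10, 10, 10]
```

Both spikes vanish while the flat signal is preserved, which is why the median family is the standard comparison point for impulse noise.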
M. Nguyen, C. Hung, and Mingon Kang. "A Comparison on Sparse Coding and Moran's I Method for Image Denoising." Proceedings of the International Conference on Research in Adaptive and Convergent Systems, 2017. doi:10.1145/3129676.3129711
Software information sites such as Stack Overflow, Super User, and Ask Ubuntu allow users to post software-related questions, answer questions asked by other users, and add tags to their questions. Tagging is popular across web communities because letting users classify their own content is less costly than employing an expert to categorize it. However, tagging systems suffer from tag explosion and tag synonyms. To address these problems, we propose a tag recommendation method using topic modeling approaches. Topic models offer dimensionality reduction and a natural notion of document similarity. We also emphasize the highest-weighted topics when calculating document similarity, in order to retrieve more relevant documents. Our tag recommendation method combines document similarity with historical tag occurrence to compute tag scores. Experimental results show that emphasizing the highest topic distributions increases the overall performance of tag recommendation.
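A minimal sketch of the similarity-plus-emphasis idea (the boosting scheme, weights, and tiny corpus here are illustrative assumptions, not the paper's actual scoring function): up-weight each document's top topic proportions, compare topic vectors by cosine similarity, and score tags by the summed similarity of the documents carrying them:

```python
import math

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def emphasize_top(dist, k=2, boost=2.0):
    """Up-weight a document's k highest topic proportions before comparing."""
    top = sorted(range(len(dist)), key=lambda i: dist[i], reverse=True)[:k]
    return [d * boost if i in top else d for i, d in enumerate(dist)]

def recommend_tags(query, corpus, n=2):
    """Score each tag by the summed similarity of the documents that carry it."""
    scores = {}
    q = emphasize_top(query)
    for dist, tags in corpus:
        sim = cosine(q, emphasize_top(dist))
        for tag in tags:
            scores[tag] = scores.get(tag, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:n]

corpus = [
    ([0.7, 0.2, 0.1], ["python", "pandas"]),  # hypothetical topic mixes
    ([0.1, 0.8, 0.1], ["java"]),
]
print(recommend_tags([0.6, 0.3, 0.1], corpus, n=1))  # ['python']
```

Because the query's dominant topic matches the first document's, that document's tags outrank the second's.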
Beomseok Hong, Yanggon Kim, and Sang Ho Lee. "An Efficient Tag Recommendation Method using Topic Modeling Approaches." Proceedings of the International Conference on Research in Adaptive and Convergent Systems, 2017. doi:10.1145/3129676.3129709
Zhi-Guo Chen, Ho-Seok Kang, Shang-nan Yin, Sung-Ryul Kim
In recent cyber incidents, ransom software (ransomware) has posed a major threat to the security of computer systems. Consequently, ransomware detection has become a hot topic in computer security. Unfortunately, current signature-based and static detection models are often easily evaded through obfuscation, polymorphism, compression, and encryption. To overcome the shortcomings of signature-based and static approaches, we propose a dynamic ransomware detection system that uses data mining techniques such as Random Forest (RF), Support Vector Machine (SVM), Simple Logistic (SL), and Naive Bayes (NB) algorithms to detect both known and unknown ransomware. We monitor the actual (dynamic) behavior of software to generate API call flow graphs (CFG) and transform them into a feature space. Data normalization and feature selection are then applied to select the features most informative for discriminating between the various categories of software and benign software. Finally, the data mining algorithms build a detection model that judges whether a program is benign software or ransomware. Our experimental results show that the proposed system improves ransomware detection performance. In particular, with the Simple Logistic (SL) algorithm, the system achieves 98.2% accuracy and a 97.6% detection rate, while also reducing the false positive rate to 1.2%.
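The feature-extraction step can be sketched as follows (a simplification, not the authors' pipeline; the traced API names are hypothetical): treat consecutive API calls in a trace as edges of a call-flow graph and count them, giving a feature vector a classifier can consume:

```python
from collections import Counter

def api_bigram_features(call_sequence):
    """Turn a traced API-call sequence into edge counts of its call-flow
    graph (consecutive-call bigrams), i.e. a sparse feature vector."""
    return Counter(zip(call_sequence, call_sequence[1:]))

# Hypothetical trace of a file-encrypting loop.
trace = ["FindFirstFile", "ReadFile", "CryptEncrypt", "WriteFile",
         "ReadFile", "CryptEncrypt", "WriteFile", "DeleteFile"]
feats = api_bigram_features(trace)
print(feats[("CryptEncrypt", "WriteFile")])  # 2
```

Repeated encrypt-then-write edges of this kind are the sort of dynamic signal that survives the obfuscation and packing that defeat static signatures.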
Zhi-Guo Chen, Ho-Seok Kang, Shang-nan Yin, and Sung-Ryul Kim. "Automatic Ransomware Detection and Analysis Based on Dynamic API Calls Flow Graph." Proceedings of the International Conference on Research in Adaptive and Convergent Systems, 2017. doi:10.1145/3129676.3129704
The design and development of an efficient wide-area communication and computing approach remains one of the greatest challenges in harvesting the gigantic volume of data generated by the many application services of smart cities. The data may vary in priority across the different stages of an application service. This paper explores and leverages Software Defined Networking (SDN) to develop a communication approach for data transfers in smart city applications. Specifically, we propose an efficient approach to minimizing the end-to-end delay of data transmission in an SDN infrastructure by using a Timestamp Recording method to compare the arrival and departure of flows and packets over a period of time. The proposed SDN-based approach is designed to be QoS-aware, so network traffic can be delivered according to the priority level of the traffic service. Finally, we evaluate the proposed approach on the Global Environment for Network Innovations (GENI) testbed. We compare the Timestamp Recording method with other common delay measurement techniques, and our analysis demonstrates the effectiveness of the proposed SDN-based approach at scale.
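The core of timestamp recording is simple enough to sketch (illustrative only; packet IDs and times below are made up, and a real deployment must handle clock synchronization between the recording points): subtract each packet's departure timestamp at the ingress from its arrival timestamp at the egress:

```python
def end_to_end_delays(departures, arrivals):
    """Per-packet one-way delay from timestamps recorded when each packet
    leaves the ingress point and reaches the egress point.  Packets seen
    at only one end (drops, reordering windows) are skipped."""
    return {pid: arrivals[pid] - departures[pid]
            for pid in departures if pid in arrivals}

dep = {"pkt1": 10.000, "pkt2": 10.050}   # seconds, hypothetical
arr = {"pkt1": 10.012, "pkt2": 10.071}
delays = end_to_end_delays(dep, arr)
avg_ms = sum(delays.values()) / len(delays) * 1000
print(round(avg_ms, 1), "ms")  # 16.5 ms
```

Aggregating such per-packet delays per flow over a time window is what allows the QoS-aware controller to compare paths and reroute high-priority traffic.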
Tommy Chin, M. Rahouti, and Kaiqi Xiong. "End-to-End Delay Minimization Approaches Using Software-Defined Networking." Proceedings of the International Conference on Research in Adaptive and Convergent Systems, 2017. doi:10.1145/3129676.3129731
Alternative splicing refers to the production of multiple mRNA isoforms from a single gene due to alternative selection of exons or splice sites during pre-mRNA splicing. While canonical alternative splicing produces a linear form of RNA by joining an upstream donor site (5' splice site) with a downstream acceptor site (3' splice site), a special form of alternative splicing produces a non-coding circular form of RNA (circular RNA) by ligating a downstream donor site (5' splice site) with an upstream acceptor site (3' splice site); i.e., back-splicing. Over the past two decades, many studies have discovered this special form of alternative splicing. Although circular RNAs have garnered considerable attention in the scientific community for their biogenesis and functions, these studies have focused on exonic circular RNAs (circRNAs: both donor and acceptor sites are at exon boundaries) and circular intronic RNAs (ciRNAs: donor and acceptor are from a single intron). They proceeded in the relative absence of methods for detecting a third group, circular complex RNAs (ccRNAs: either the donor or the acceptor site is not at an exon boundary), which contain at least one exon and one or more flanking introns. Studying ccRNAs would be a significant first step toward filling this void. In this paper, we develop a new computational algorithm that detects all three types of circular RNAs, and we apply it to a set of RNA-seq data to examine the composition of circular RNAs in the dataset. Surprisingly, our results show that the new type of circular RNA (ccRNA) is the second most common type, while circRNA is, as expected, the most common.
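The three-way distinction the abstract draws can be sketched as a junction classifier (a strong simplification of the paper's algorithm: single strand, integer coordinates, exon boundaries given as a set, and ciRNAs, which need intron annotations, are left out):

```python
def classify_circular_junction(donor, acceptor, exon_boundaries):
    """Classify a splice junction by its coordinates (+ strand assumed).
    A back-splice ligates a donor to an UPSTREAM acceptor:
      circRNA - both ends fall on annotated exon boundaries
      ccRNA   - at least one end is off an exon boundary
    (ciRNAs, formed within a single intron, are omitted here.)"""
    if donor <= acceptor:
        return "linear"          # ordinary forward splice, not circular
    d_on = donor in exon_boundaries
    a_on = acceptor in exon_boundaries
    return "circRNA" if (d_on and a_on) else "ccRNA"

exons = {100, 250, 400, 550}     # hypothetical annotated boundaries
print(classify_circular_junction(550, 100, exons))  # circRNA
print(classify_circular_junction(550, 130, exons))  # ccRNA
```

The detection problem in real RNA-seq data is harder mainly because the back-spliced junction reads must first be found by split-read alignment before any such classification applies.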
Mohamed Chaabane, E. Rouchka, and J. Park. "Circular RNA Detection from High-throughput Sequencing." Proceedings of the International Conference on Research in Adaptive and Convergent Systems, 2017. doi:10.1145/3129676.3129734
Recently, the importance of velocity, one of the characteristics of big data (5V: Volume, Variety, Velocity, Veracity, and Value), has been emphasized in data processing, leading to several studies on real-time stream processing, a technology for fast and accurate processing and analysis of big data. In this study, we propose the Squall framework, which uses in-memory technology, and describe the framework and its operations. Squall supports both real-time event stream processing and micro-batch processing, achieving high performance and memory efficiency by exploiting Go's excellent concurrency and garbage collection, available without a virtual machine; many jobs can therefore run on one machine. In addition, because data flows through memory and operation steps are consolidated, performance is further improved. Squall provides relatively good performance compared to the existing Apache Storm and Spark Streaming. In conclusion, it can serve as a general-purpose big data processing framework, overcoming drawbacks of Apache Storm and Spark Streaming by leveraging the advantages of the Go language.
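Squall itself is written in Go; purely to illustrate the micro-batch mode the abstract mentions (a language-neutral sketch in Python, not Squall's API), a micro-batcher groups an unbounded event stream into fixed-size batches for a batch operator, flushing any partial batch at the end:

```python
def micro_batches(stream, batch_size):
    """Group an unbounded event stream into fixed-size micro-batches so a
    batch operator can process them; the last partial batch is flushed."""
    batch = []
    for event in stream:
        batch.append(event)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch  # flush the trailing partial batch

print(list(micro_batches(range(7), 3)))  # [[0, 1, 2], [3, 4, 5], [6]]
```

Micro-batching trades a small bounded latency (waiting to fill a batch) for the throughput of batch operators, which is the same trade-off Spark Streaming makes; per-event processing avoids the wait at the cost of per-event overhead.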
J. An, J. Son, and Jiwoo Kang. "Squall: Stream Processing and Analysis Model Design." Proceedings of the International Conference on Research in Adaptive and Convergent Systems, 2017. doi:10.1145/3129676.3129707