"Session details: CIVIA - computational intelligence and video & image analysis track" by P. Lecca and J. Corchado. DOI: 10.1145/3243943

The special track on Computational Intelligence and Video & Image Analysis (CIVIA) is a forum for engineers, researchers, and practitioners throughout the world to share technical ideas and experiences related to the implementation and application of Computational Intelligence to Video & Image Analysis, and to Systems Biology & BioMedicine. Many conferences are dedicated to Evolutionary Computing (ICEC, GECCO, PPSN, etc.), Video & Image Analysis (ICIAR, ICIAP, ICASSP, IJCAI, etc.), and Systems Biology & BioMedical Engineering (ICSB, RECOMB, BME, etc.), but they offer little on blending Computational Logic, Boolean Satisfiability, and Soft Computing tools to address practical applications in Image Analysis and in Bio-Systems Modeling and Simulation. Research papers applying computational intelligence techniques to video and image analysis are therefore welcome, however theoretical they may be, provided they have practical applications.
"Recommendations to improve user experience in second screen applications: a case study" by D. Souza, Marcos C. R. Seruffo, and M. K. Eliasquevici. DOI: 10.1145/3019612.3019688

This article presents a case study that applies recommendations from the human-computer interaction (HCI) literature to adapt a second-screen application, DFapp, in order to improve the user experience. The paper explains how the application was tested with end users after the HCI recommendations were implemented. A set of guidelines derived from the literature supported the improvement of the application's interactivity and was essential to this process. With this study we intend not only to contribute to a better understanding of the combined use of TV and second screen in diverse environments, but also to provide supporting literature for second-screen application developers. As a result, we present recommendations covering both general HCI concepts and a core recommendation for second-screen applications.
"Handling bitcoin conflicts through a glimpse of structure" by Thibaut Lajoie-Mazenc, R. Ludinard, and E. Anceaume. DOI: 10.1145/3019612.3019657

Double spending and blockchain forks are two main issues the Bitcoin crypto-system is confronted with. The former refers to an adversary's ability to spend the very same coin more than once, while the latter reflects transient inconsistencies in the history of the blockchain distributed data structure. We present a new approach to tackle these issues: it adds local synchronization constraints on Bitcoin's validation operations and makes these constraints independent of the native blockchain protocol. The synchronization constraints are handled by nodes chosen randomly and dynamically within the Bitcoin system. We show that with this approach the content of the blockchain is consistent with all validated transactions and blocks, which guarantees the absence of both double-spending attacks and blockchain forks.
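The core idea in the abstract above, serializing the validation of transactions that spend the same coin through randomly chosen nodes, can be sketched as follows. This is an illustrative toy only, not the paper's actual protocol: the per-coin validator sampling, the lock-based conflict check, and all names (`ValidatorNode`, `validate_tx`) are assumptions made for the sketch.

```python
import random

class ValidatorNode:
    """One node's local view: the coins it has already locked for a validated tx."""
    def __init__(self, name):
        self.name = name
        self.locked = set()

    def try_lock(self, coin_id):
        if coin_id in self.locked:
            return False          # this coin was already spent through this node
        self.locked.add(coin_id)
        return True

def pick_validators(nodes, coin_id, k):
    # Deterministic per-coin sampling stands in for the random, dynamic
    # node selection described in the abstract.
    return random.Random(coin_id).sample(nodes, k)

def validate_tx(nodes, coin_ids, k=3):
    """A tx is validated only if every validator responsible for each input
    coin accepts the lock, so a second tx spending the same coin is
    rejected by those same validators."""
    for coin in coin_ids:
        for v in pick_validators(nodes, coin, k):
            if not v.try_lock(coin):
                return False
    return True

nodes = [ValidatorNode(f"n{i}") for i in range(10)]
assert validate_tx(nodes, ["coinA", "coinB"])   # first spend accepted
assert not validate_tx(nodes, ["coinA"])        # double spend rejected
```

Because the validators for a given coin are sampled consistently, any two transactions spending that coin reach the same nodes, which is what lets a purely local check rule out the conflict.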
"Session details: BIO - computational biology and bioinformatics track". DOI: 10.1145/3243941

The ACM SAC 2017 Bioinformatics Track aimed at promoting current advances in the biological sciences that rely on analytical methods integrating the mathematical, physical, and computer sciences. The track is primarily devoted to publishing papers focused on timely, well-defined biological problems whose solutions have benefited from the use of computational techniques or the implementation of new ones. It solicited papers that present a biological problem comprehensively together with (part of) its solution obtained through the application of computational methods, including analysis, modeling, and simulation.
"Multiple back-end support for the armadillo linear algebra interface" by Paolo Viviani, Marco Aldinucci, M. Torquati, and Roberto d'Ippolito. DOI: 10.1145/3019612.3019743

The Armadillo C++ library provides programmers with a high-level, Matlab-like syntax for linear algebra. Its design aims at a good balance between speed and ease of use, and it can be linked against different back-ends, i.e., different LAPACK-compliant libraries. In this work we present a novel run-time support for Armadillo that gracefully extends the mainstream implementation to enable back-end switching without recompilation, as well as support for multiple back-ends. The extension is specifically designed not to affect Armadillo's class template prototypes, so as to remain interoperable with future evolutions of the Armadillo library itself. The proposed software stack is then tested for functionality and performance against a kernel code extracted from an industrial application.
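The design idea behind run-time back-end switching can be sketched in miniature. This Python stand-in is not the paper's C++ implementation: the dispatch layer, the environment-variable selector, and the two toy back-ends are all assumptions made to illustrate why call sites need no recompilation when the back-end changes.

```python
import os

class ReferenceBackend:
    """Plain implementation; stands in for one LAPACK-compliant library."""
    name = "reference"
    def dot(self, a, b):
        return sum(x * y for x, y in zip(a, b))

class BlockedBackend:
    """Same result, different strategy; stands in for an alternative back-end."""
    name = "blocked"
    def dot(self, a, b):
        s = 0.0
        for i in range(0, len(a), 4):           # process in blocks of four
            s += sum(x * y for x, y in zip(a[i:i+4], b[i:i+4]))
        return s

_BACKENDS = {b.name: b for b in (ReferenceBackend(), BlockedBackend())}

def current_backend():
    # The active back-end is chosen at run time, not fixed at link time.
    return _BACKENDS[os.environ.get("LA_BACKEND", "reference")]

def dot(a, b):
    # Call sites never name a back-end, so switching requires no recompilation.
    return current_backend().dot(a, b)

assert dot([1.0, 2.0], [3.0, 4.0]) == 11.0
os.environ["LA_BACKEND"] = "blocked"
assert dot([1.0, 2.0], [3.0, 4.0]) == 11.0      # same answer, other back-end
```

In the real C++ setting the same effect is typically achieved by routing BLAS/LAPACK calls through a thin indirection layer and loading the chosen shared library dynamically.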
"Coarse-grained mtime update for better fsync() performance" by H. Son, Seongjin Lee, Gyeongyeol Choi, and Y. Won. DOI: 10.1145/3019612.3019739

This work improves the performance of fsync(), one of the most expensive system calls in UNIX operating systems. With recent advances in flash-based storage, storage devices can flush data blocks orders of magnitude faster than legacy HDDs; often the rate at which data blocks are flushed to the device outpaces the rate at which the CPU updates the file system time. The amount of dirty blocks created in the system depends on the kernel's timer interrupt interval. Read and write operations update the atime and mtime metadata, respectively. These timestamps are useful, but their frequent updates cause significant performance degradation. For atime, the file system already offers several options that mediate between usefulness and performance efficiency. Most database management systems frequently call fsync() to guarantee the consistency of user data, and these synchronous writes incur journaling overhead for updating mtime metadata in the EXT4 file system. The effect of frequent mtime updates on write-intensive workloads has, however, been overlooked. We introduce a coarse-grained mtime update scheme that increases the mtime/ctime timestamp update interval while maintaining the same resolution for kernel timer interrupts. As a result, the coarse-grained update interval scheme reduces the journaling overhead with the least effort. Experiments show that the I/O performance of a random workload increased by about 7% on mobile and 107% on PC against the default mtime update interval. Insert operations in SQLite's PERSIST mode show I/O performance gains of 8.4% on mobile and 45.1% on PC, and on a MySQL OLTP workload performance increased by 7.9%.
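The central trade-off described above, that a coarser mtime granularity lets many consecutive writes share one timestamp and thus skip metadata journaling, can be sketched with a back-of-envelope model. The numbers and the counting function are hypothetical, not the paper's measurements or the EXT4 implementation.

```python
def count_mtime_journal_updates(write_times_ns, granularity_ns):
    """Count how many fsync()'d writes see a *changed* mtime value,
    i.e. how many would have to journal inode metadata."""
    updates = 0
    last_recorded = None
    for t in write_times_ns:
        stamp = t - (t % granularity_ns)    # mtime rounded down to granularity
        if stamp != last_recorded:
            updates += 1                     # timestamp changed: journal it
            last_recorded = stamp
    return updates

# 1000 writes, one every 100 us (fast flash storage keeps up easily)
writes = [i * 100_000 for i in range(1000)]

fine   = count_mtime_journal_updates(writes, granularity_ns=100_000)     # per-write ticks
coarse = count_mtime_journal_updates(writes, granularity_ns=10_000_000)  # 10 ms ticks

assert fine == 1000     # every write journals metadata
assert coarse == 10     # two orders of magnitude fewer journal updates
```

The sketch only captures the counting argument; the paper's scheme additionally keeps the kernel timer resolution unchanged and decouples it from the timestamp update interval.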
"Prepared scan: efficient retrieval of structured data from HBase" by Francisco Neves, R. Vilaça, J. Pereira, and R. Oliveira. DOI: 10.1145/3019612.3019863

The ability of NoSQL systems to scale better than traditional relational databases motivates many applications to migrate their data to NoSQL systems, even when they do not aim to exploit the schema flexibility these systems provide. However, accessing structured data is costly precisely because of that flexibility, incurring substantial bandwidth and processing-unit usage. In this paper we analyse this cost in Apache HBase and propose a new scan operation, named Prepared Scan, that optimizes access to regularly structured data by taking advantage of a schema known to the application in advance. Using an industry-standard benchmark, we show that Prepared Scan improves throughput by up to 29% and decreases network bandwidth consumption by up to 20%.
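The bandwidth argument above can be illustrated with a toy encoding comparison. The two payload formats below are hypothetical, not HBase's wire format: they only show why declaring the column schema up front lets values travel positionally instead of each cell carrying its own row/column labels.

```python
import json

SCHEMA = ["cf:name", "cf:age", "cf:city"]   # agreed once, up front

def generic_scan_payload(rows):
    # Every cell self-describes, as with a schema-free scan.
    cells = [{"row": rk, "col": c, "val": v}
             for rk, vals in rows for c, v in zip(SCHEMA, vals)]
    return json.dumps(cells)

def prepared_scan_payload(rows):
    # Both sides know the schema; only row keys and values travel.
    return json.dumps([[rk, vals] for rk, vals in rows])

rows = [(f"row{i}", ["alice", "30", "lisbon"]) for i in range(100)]

# The positional encoding is strictly smaller for the same data.
assert len(prepared_scan_payload(rows)) < len(generic_scan_payload(rows))
```

The per-cell labels grow with the number of cells while the schema is paid for once, which is the same asymmetry the Prepared Scan operation exploits.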
"A page replacement algorithm based on frequency derived from reference history" by Hong-Bin Tsai and C. Lei. DOI: 10.1145/3019612.3019737

The page replacement algorithm is one of the core components of modern operating systems. It decides which victim page to evict from main memory by analyzing attributes of the referenced pages. The evicted page is moved to the backing store in the memory hierarchy and brought back into main memory once it is referenced again; this technique of using storage as part of memory is called swapping. However, there is a non-trivial performance gap between memory and storage: permanent storage such as a solid-state disk (SSD) is much slower than DRAM, with, e.g., roughly 10^4 longer write latency [9]. As a result, swapping between main memory and storage causes a discernible drop in system performance. Conversely, a higher hit ratio in the page replacement algorithm implies fewer I/O waits on storage and consequently better overall performance. In this paper we propose a log-based page replacement algorithm built on the assumption that better replacement hints can be obtained by analyzing page reference history. The algorithm selects as victim the page with the lowest reference rate in a window-sized log. A simulation shows that our method outperforms conventional page replacement algorithms by up to 11%.
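The eviction rule described above, picking the resident page with the lowest reference rate within a window-sized log, can be sketched as a minimal cache model. Details such as the tie-breaking rule and the class interface are guesses made for the sketch, not the paper's algorithm as published.

```python
from collections import Counter, deque

class WindowFrequencyCache:
    """Evicts the resident page referenced least often in a bounded
    window of recent references."""
    def __init__(self, capacity, window):
        self.capacity = capacity
        self.log = deque(maxlen=window)   # sliding reference-history window
        self.resident = set()

    def reference(self, page):
        """Record a reference; return the evicted page, if any."""
        self.log.append(page)
        evicted = None
        if page not in self.resident:
            if len(self.resident) >= self.capacity:
                freq = Counter(self.log)
                # Victim: resident page with the lowest in-window count.
                evicted = min(self.resident, key=lambda p: freq[p])
                self.resident.discard(evicted)
            self.resident.add(page)
        return evicted

cache = WindowFrequencyCache(capacity=2, window=8)
for p in ["A", "A", "A", "B", "C"]:
    victim = cache.reference(p)
assert victim == "B"   # B was referenced once in the window, A three times
```

Unlike plain LRU, the decision here depends on how often a page appeared in the recent log, not merely on when it was last touched, which is the distinction the abstract draws.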
"Session details: SVT - software verification and testing track". DOI: 10.1145/3243967
"Distributed smart street LED lights for human satisfaction in smart city: student research abstract" by M. Mazhar Rathore. DOI: 10.1145/3019612.3019925

In this paper we propose a cost-effective solution that uses light-emitting diode (LED) street lights for multiple purposes beyond basic lighting by generating various useful light patterns. The generated patterns help citizens by 1) guiding them to a safe route during an event, emergency, or disaster such as a fire or flood, 2) conveying messages or information to them, and 3) providing entertainment. We have developed an API for the Intel Edison platform that can generate thousands of dynamic light patterns by varying color, light intensity, blinking rate, and delay. The LED lights are connected in a distributed fashion via Wi-Fi, supporting both P2P and central communication between them. The challenges of managing delays between light switching and of time synchronization are addressed by deploying the Precision Time Protocol (PTP). We evaluated the system through a user study as well as by system response time.
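The claim that a few per-lamp parameters already yield thousands of patterns is a simple combinatorial one, and can be sketched as follows. The parameter values below are made up for illustration; they are not the paper's Intel Edison API.

```python
from itertools import product

COLORS      = ["red", "green", "blue", "amber", "white"]
INTENSITIES = [25, 50, 75, 100]          # percent
BLINK_HZ    = [0, 1, 2, 4, 8]            # 0 = steady on
DELAYS_MS   = [0, 100, 250, 500]         # phase offset between lamps

def all_patterns():
    """Every combination of the per-lamp settings is a distinct pattern."""
    return list(product(COLORS, INTENSITIES, BLINK_HZ, DELAYS_MS))

patterns = all_patterns()
assert len(patterns) == 5 * 4 * 5 * 4    # 400 single-lamp settings
```

Sequencing these settings across a row of lamps multiplies the count again, which is how a modest parameter space reaches the thousands of dynamic patterns the abstract mentions; keeping the lamps' phase offsets meaningful is what motivates the PTP-based time synchronization.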