Pub Date: 2017-07-01 DOI: 10.1109/ISCC.2017.8024664
I. Gravalos, A. Siokis, P. Kokkinos, Emmanouel Varvarigos
Energy consumption and the associated costs constitute a crucial issue in the design and operation of data networks and data centers. Energy awareness is required at all levels, ranging from the physical layer to algorithms, protocols and applications. Architecture-wise, a promising solution for tackling the increasing energy requirements is the deployment of optics at both long and shorter distances, including within data centers. Vertical Cavity Surface Emitting Lasers (VCSELs) constitute a popular photonic transmitter technology used in numerous short-range applications, which also offers the ability to reduce energy consumption by scaling down the transmission bit rate. In this study we focus on the algorithmic aspects of energy management by proposing an OptiMal EnerGy Aware (OMEGA) routing algorithm to operate in optical networks utilizing VCSEL-based opto-electronic links. The algorithm leverages the capability of VCSELs to adapt their energy dissipation to the transmission bit rate. Simulation results, under various traffic patterns, show that OMEGA efficiently balances the traffic load over the network's links, resulting in high throughput and low energy consumption.
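The abstract does not specify OMEGA itself; as a hypothetical illustration of the underlying idea, the sketch below runs Dijkstra with link weights given by an assumed linear VCSEL power model, so the cheapest path is the one dissipating the least transmission energy (the graph, function names and power coefficients are all invented):

```python
import heapq

def energy_cost(bit_rate_gbps, base_mw=1.0, mw_per_gbps=0.5):
    # Assumed linear model: VCSEL power grows with the transmission bit rate.
    return base_mw + mw_per_gbps * bit_rate_gbps

def min_energy_path(graph, src, dst):
    # graph: node -> list of (neighbour, link_bit_rate_gbps).
    # Standard Dijkstra, with per-link energy as the edge weight.
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, rate in graph.get(u, []):
            nd = d + energy_cost(rate)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return list(reversed(path)), dist[dst]
```

On a toy four-node graph where the A-C-D links run at a low bit rate, the low-rate route wins despite equal hop count, which is the load/energy trade-off the abstract describes.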
{"title":"Routing algorithm with smart energy management on VCSEL interconnected networks","authors":"I. Gravalos, A. Siokis, P. Kokkinos, Emmanouel Varvarigos","doi":"10.1109/ISCC.2017.8024664","DOIUrl":"https://doi.org/10.1109/ISCC.2017.8024664","url":null,"abstract":"Energy consumption and the associated costs constitute a crucial issue concerning the design and operation of data networks and data centers. Energy-awareness is required in all levels, ranging from physical layer to algorithms, protocols and applications. Architecture-wise, a promising solution for tackling the increasing energy requirements is the deployment of optics at both long and shorter distances, including within data centers. Vertical Cavity Surface Emitting Lasers (VCSEL) constitute a popular photonic transmitter technology used in numerous short-range applications, providing also the ability to reduce energy consumption by scaling down the transmission bit rate. In this study we focus on the algorithmic aspects of energy management by proposing an OptiMal EnerGy Aware (OMEGA) routing algorithm to operate in optical networks utilizing VCSEL-based opto-electronic links. The algorithm leverages the capability of VCSELs to adapt the energy dissipation with respect to the transmission bit rate. 
Simulation results, under various traffic patterns, show that OMEGA balances efficiently the traffic load over the network's links, resulting in high throughput and low energy consumption.","PeriodicalId":106141,"journal":{"name":"2017 IEEE Symposium on Computers and Communications (ISCC)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124561037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-07-01 DOI: 10.1109/ISCC.2017.8024576
E. Chandler
Data-buffering techniques currently used in time-division-multiple-access (TDMA) networks are described, including a short-delay technique called sub-framing. The sub-framing technique produces minimum user-to-user streamed-data throughput delays but results in data-transfer interruptions when TDMA time-slot resources have reassigned positions in the time frame. A simple modification to the sub-framing data-buffering technique is proposed, and it accommodates dynamic TDMA time-slot resource reassignments without data-transfer interruptions. This proposed data buffering technique provides the same minimum user-to-user streamed-data throughput delay, which is approximately equal to the difference between the TDMA frame time and the TDMA time-slot duration, but does not have data-transfer interruptions when TDMA time-slot resources have reassigned positions in the time frame.
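The delay figure quoted above is simple arithmetic; a one-line helper makes the relationship explicit (the millisecond values in the example are made up):

```python
def min_streamed_delay(frame_time_ms, slot_time_ms):
    # Per the abstract: the minimum user-to-user streamed-data delay is
    # approximately the TDMA frame time minus the time-slot duration.
    return frame_time_ms - slot_time_ms
```

For instance, a 20 ms frame with 1.25 ms time slots gives a minimum delay of roughly 18.75 ms.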
{"title":"A TDMA data-buffering technique that provides minimum throughput delays while allowing time-slot reassignments without data-transfer interruptions","authors":"E. Chandler","doi":"10.1109/ISCC.2017.8024576","DOIUrl":"https://doi.org/10.1109/ISCC.2017.8024576","url":null,"abstract":"Data-buffering techniques currently used in time-division-multiple-access (TDMA) networks are described, including a short-delay technique called sub-framing. The sub-framing technique produces minimum user-to-user streamed-data throughput delays but results in data-transfer interruptions when TDMA time-slot resources have reassigned positions in the time frame. A simple modification to the sub-framing data-buffering technique is proposed, and it accommodates dynamic TDMA time-slot resource reassignments without data-transfer interruptions. This proposed data buffering technique provides the same minimum user-to-user streamed-data throughput delay, which is approximately equal to the difference between the TDMA frame time and the TDMA time-slot duration, but does not have data-transfer interruptions when TDMA time-slot resources have reassigned positions in the time frame.","PeriodicalId":106141,"journal":{"name":"2017 IEEE Symposium on Computers and Communications (ISCC)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128998715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-07-01 DOI: 10.1109/ISCC.2017.8024555
Saw Lin, V. Ramachandran, Tinotenda Zinyama
Probing is a technique used for network monitoring and measurement applications, including detection of silent failures and performance analysis to verify compliance with service-level agreements. Many existing approaches to probing-path selection seek to minimize the number of probes (an NP-hard problem) or to minimize probing delay, but they fail to properly assess the network overhead imposed by the probing strategy. In this paper, we present an approximation algorithm for probing-path selection that simultaneously considers two types of overhead-minimization objectives. Our approach can be customized to achieve a desired balance between the number of probes used overall and the effect on individual network components such as network devices and links. Through analysis and simulation on various topologies and under a variety of settings, we assess the effect of customization on a number of network-overhead measures. Compared to previous work, our approach efficiently computes a set of end-to-end probing paths fully covering a network without unduly overloading it.
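The paper's approximation algorithm is not reproduced in the abstract; since probe minimization is an instance of set cover, a minimal sketch of the classic greedy approximation conveys the flavour (path IDs and link names are invented, and the real algorithm additionally balances per-component overhead):

```python
def greedy_probe_selection(candidate_paths, links_to_cover):
    # candidate_paths: path_id -> set of links the probe traverses.
    # Greedily pick the path covering the most still-uncovered links;
    # this is the classic ln(n)-approximation for set cover.
    uncovered = set(links_to_cover)
    chosen = []
    while uncovered:
        best = max(candidate_paths,
                   key=lambda p: len(candidate_paths[p] & uncovered))
        if not candidate_paths[best] & uncovered:
            raise ValueError("remaining links cannot be covered")
        chosen.append(best)
        uncovered -= candidate_paths[best]
    return chosen
```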
{"title":"Balancing overhead-minimization objectives in network probing-path selection","authors":"Saw Lin, V. Ramachandran, Tinotenda Zinyama","doi":"10.1109/ISCC.2017.8024555","DOIUrl":"https://doi.org/10.1109/ISCC.2017.8024555","url":null,"abstract":"Probing is a technique used for network monitoring and measurement applications, including detection of silent failures and performance analysis to verify compliance with service-level agreements. Many existing approaches to probing-path selection seek to minimize the number of probes (an NP-hard problem) or to minimize probing delay, but they fail to properly assess the network overhead imposed by the probing strategy. In this paper, we present an approximation algorithm for probing-path selection that simultaneously considers two types of overhead-minimization objectives. Our approach can be customized to achieve a desired balance between the number of probes used overall and the effect on individual network components such as network devices and links. Through analysis and simulation on various topologies and under a variety of settings, we assess the effect of customization on a number of network-overhead measures. Compared to previous work, our approach efficiently computes a set of end-to-end probing paths fully covering a network without unduly overloading it.","PeriodicalId":106141,"journal":{"name":"2017 IEEE Symposium on Computers and Communications (ISCC)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129358523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-07-01 DOI: 10.1109/ISCC.2017.8024582
A. Ioannidi, D. Gavalas, Vlasios Kasapakis
Recent developments in mobile technologies have enabled the diffusion of augmented reality. This article explores new possibilities for leveraging the power of augmented reality beyond its typical use as a means of projecting virtual content over physical objects; that is, it investigates the potential of augmented reality as a tool for participatory content creation and sharing. To demonstrate this idea, we have developed Flaneur, a mobile augmented reality application which invites users to wander around a city to discover and highlight its architectural assets. Flaneur serves both as an architectural heritage guide and as a crowdsourcing platform. In particular, beyond consuming content edited by administrators, users can share their knowledge and experiences about heritage buildings by uploading textual and graphical annotations, which are later delivered to peer users as augmented content. The platform allows the documentation of architecture-relevant content either for whole building blocks or for specific architectural and decorative elements. Moreover, it enables the precise placement and arrangement of digital content over the physical asset to be augmented. Flaneur has been evaluated through field trials, which provided valuable insights regarding the effectiveness, utility and restrictions of the application.
{"title":"Flaneur: Augmented exploration of the architectural urbanscape","authors":"A. Ioannidi, D. Gavalas, Vlasios Kasapakis","doi":"10.1109/ISCC.2017.8024582","DOIUrl":"https://doi.org/10.1109/ISCC.2017.8024582","url":null,"abstract":"The recent developments in the area of mobile technologies have enabled the diffusion of augmented reality. This article explores new possibilities for leveraging the power of augmented reality, further to its typical use as a means for projecting virtual content over physical objects. That is, it investigates the potential of augmented reality as a tool for participatory content creation and sharing. In order to demonstrate this idea, we have developed Flaneur, a mobile augmented reality application which invites users to wander around a city to discover and highlight its architectural assets. Flaneur serves both as an architectural heritage guide and as a crowdsourcing platform. In particular, further to consuming content edited by administrators, users are enabled to share their knowledge and experiences about heritage buildings through uploading textual and graphical annotation, which is later delivered to peer users as augmented context. The platform allows the documentation of architecture-relevant content either for whole building blocks or even for specific architectural and decorative elements. Moreover, it enables the precise placement and arrangement of digital content over the physical asset to be augmented. 
Flaneur has been evaluated though field trials, which provided valuable insights as regards the effectiveness, utility and restrictions of the application.","PeriodicalId":106141,"journal":{"name":"2017 IEEE Symposium on Computers and Communications (ISCC)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116018633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-07-01 DOI: 10.1109/ISCC.2017.8024545
Diogo Machado, Tiago Paiva, I. Dutra, V. S. Costa, P. Brandão
Diabetes management is a complex and sensitive problem, as each diabetic is a unique case with particular needs. The optimal solution would be constant monitoring of the diabetic's values and automatically acting accordingly. We propose an approach that guides the user and analyses the gathered data to give individual advice. Using data mining algorithms and methods, we uncover hidden behaviour patterns that may lead to crisis situations. These patterns can then be transformed into logical rules, able to trigger in a particular context and advise the user. We believe that this solution is beneficial not only for the diabetic, but also for the doctor following the case. The advice and rules are useful input that the medical expert can use while prescribing a particular treatment. During the data-gathering phase, when the number of records is not enough to attain useful conclusions, a base set of logical rules, defined from medical protocols, directives and/or advice, is responsible for advising and guiding the user. The proposed system will initially accompany the user with generic advice and, through constant learning, advise the user more specifically. We discuss this approach by describing the architecture of the system, its base rules and its data mining component. The system is to be incorporated in a diabetes management application for Android that is currently under development.
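The rule-based advising described above can be sketched as a trivial condition/message loop; the rules and threshold values below are placeholders for illustration only, not medical guidance:

```python
def advise(rules, reading):
    # Fire every base rule whose condition holds for the current context;
    # in the proposed system such rules come from medical protocols and,
    # later, from mined behaviour patterns.
    return [message for condition, message in rules if condition(reading)]

# Hypothetical base rules (placeholder thresholds, not medical advice).
BASE_RULES = [
    (lambda r: r["glucose_mg_dl"] < 70,
     "Glucose low: consider fast-acting carbohydrates."),
    (lambda r: r["glucose_mg_dl"] > 180,
     "Glucose high: review insulin dosage with your doctor."),
]
```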
{"title":"Managing diabetes: Pattern discovery and counselling supported by user data in a mobile platform","authors":"Diogo Machado, Tiago Paiva, I. Dutra, V. S. Costa, P. Brandão","doi":"10.1109/ISCC.2017.8024545","DOIUrl":"https://doi.org/10.1109/ISCC.2017.8024545","url":null,"abstract":"Diabetes management is a complex and a sensible problem as each diabetic is a unique case with particular needs. The optimal solution would be a constant monitoring of the diabetic's values and automatically acting accordingly. We propose an approach that guides the user and analyses the data gathered to give individual advice. By using data mining algorithms and methods, we uncover hidden behaviour patterns that may lead to crisis situations. These patterns can then be transformed into logical rules, able to trigger in a particular context, and advise the user. We believe that this solution, is not only beneficial for the diabetic, but also for the doctor accompanying the situation. The advice and rules are useful input that the medical expert can use while prescribing a particular treatment. During the data gathering phase, when the number of records is not enough to attain useful conclusions, a base set of logical rules, defined from medical protocols, directives and/or advice, is responsible for advise and guiding the user. The proposed system will accompany the user at start with generic advice, and with constant learning, advise the user more specifically. We discuss this approach describing the architecture of the system, its base rules and data mining component. 
The system is to be incorporated in a currently developed diabetes management application for Android.","PeriodicalId":106141,"journal":{"name":"2017 IEEE Symposium on Computers and Communications (ISCC)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116894495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-07-01 DOI: 10.1109/ISCC.2017.8024613
Saja Al-Mamoori, A. Jaekel, S. Bandyopadhyay
Cloud computing depends critically on large data centers connected by a high-speed optical network. A fault-tolerant communication scheme to handle requests for communication in such a system is essential. In this paper we propose an optimal approach to the problem of developing a path-protection scheme to handle communication requests in data center (DC) networks. We formulate our problem as an Integer Linear Program (ILP) and study our approach using simulations, varying parameters such as the number of DCs and the number of disasters. Our simulations show that considering additional disasters does not add significantly to the cost of the solution, which means significant resource savings while supporting users' demands.
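The ILP formulation is not given in the abstract; the essence of path protection, finding a working path plus a link-disjoint backup so that a single link failure never disconnects the request, can be illustrated with a brute-force sketch on a toy topology (an ILP solver would replace this enumeration at scale; all names here are assumptions):

```python
from itertools import combinations

def all_paths(graph, src, dst, path=None):
    # Enumerate all simple paths from src to dst (toy sizes only).
    path = path or [src]
    if src == dst:
        yield path
        return
    for nxt in graph.get(src, []):
        if nxt not in path:
            yield from all_paths(graph, nxt, dst, path + [nxt])

def link_disjoint_pair(graph, src, dst):
    # Path protection: return a (working, backup) pair sharing no link.
    def links(p):
        return {frozenset(e) for e in zip(p, p[1:])}
    paths = list(all_paths(graph, src, dst))
    for a, b in combinations(paths, 2):
        if not links(a) & links(b):
            return a, b
    return None
```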
{"title":"Designing resilient WDM data center networks for dynamic lightpath demands","authors":"Saja Al-Mamoori, A. Jaekel, S. Bandyopadhyay","doi":"10.1109/ISCC.2017.8024613","DOIUrl":"https://doi.org/10.1109/ISCC.2017.8024613","url":null,"abstract":"Cloud computing depends critically on large data centers connected by a high-speed optical network. A fault-tolerant communication scheme to handle requests for communication in such a system is essential. In this paper we have proposed an optimal approach to the problem of developing a path-protection scheme to handle communication requests in data center (DC) networks. We have formulated our problem as an Integer Linear Program (ILP). We have studied our approach using simulations, varying parameters, such as the number of DCs, and the number of disasters. Our simulations show that considering additional disasters do not add significantly to the cost of the solution, which means significant resource savings while supporting users' demands.","PeriodicalId":106141,"journal":{"name":"2017 IEEE Symposium on Computers and Communications (ISCC)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117185686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-07-01 DOI: 10.1109/ISCC.2017.8024702
Nguyen Viet Ha, K. Kumazoe, K. Tsukamoto, M. Tsuru
Transmission Control Protocol (TCP) with Network Coding (TCP/NC) was designed to recover the lost packets without TCP retransmission to improve the goodput performance in lossy networks. However, TCP/NC is too costly to be implemented in some types of end devices, e.g., with less memory and power. In addition, TCP/NC across loss-free but thin networks may waste scarce link bandwidth due to the redundant combination packets sacrificed for the lossy network. In this paper, we propose the TCP/NC tunnel to convey end-to-end TCP sessions on a single TCP/NC flow traversing a lossy network between two special gateways without per-flow management. We implemented and validated our proposal in Network Simulator 3, in which each gateway runs a reinforced version of TCP/NC that we previously developed. The results show that the proposed TCP/NC tunnel can mitigate the goodput degradation of end-to-end TCP sessions traversing a lossy network without any change in TCP on each end host.
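TCP/NC uses random linear combinations of packets; a much-simplified XOR parity example shows the core idea behind the redundant "combination packets" mentioned above, recovering a single lost packet without retransmission (packet contents are invented):

```python
def xor_parity(packets):
    # Redundant "combination packet": byte-wise XOR of the originals.
    out = bytes(len(packets[0]))
    for p in packets:
        out = bytes(a ^ b for a, b in zip(out, p))
    return out

def recover_lost(received, parity):
    # With exactly one packet lost, XORing the survivors with the parity
    # reconstructs it -- the idea behind coding-based loss masking.
    return xor_parity(received + [parity])
```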
{"title":"Masking lossy networks by TCP tunnel with Network Coding","authors":"Nguyen Viet Ha, K. Kumazoe, K. Tsukamoto, M. Tsuru","doi":"10.1109/ISCC.2017.8024702","DOIUrl":"https://doi.org/10.1109/ISCC.2017.8024702","url":null,"abstract":"Transmission Control Protocol (TCP) with Network Coding (TCP/NC) was designed to recover the lost packets without TCP retransmission to improve the goodput performance in lossy networks. However, TCP/NC is too costly to be implemented in some types of end devices, e.g., with less memory and power. In addition, TCP/NC across loss-free but thin networks may waste scarce link bandwidth due to the redundant combination packets sacrificed for the lossy network. In this paper, we propose the TCP/NC tunnel to convey end-to-end TCP sessions on a single TCP/NC flow traversing a lossy network between two special gateways without per-flow management. We implemented and validated our proposal in Network Simulator 3, in which each gateway runs a reinforced version of TCP/NC that we previously developed. The results show that the proposed TCP/NC tunnel can mitigate the goodput degradation of end-to-end TCP sessions traversing a lossy network without any change in TCP on each end host.","PeriodicalId":106141,"journal":{"name":"2017 IEEE Symposium on Computers and Communications (ISCC)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115418881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-07-01 DOI: 10.1109/ISCC.2017.8024607
Xu Bai, Lei Jiang, Qiong Dai, Jiajia Yang, Jianlong Tan
Cooperation of software and hardware in hybrid architectures, such as the Xilinx Zynq SoC combining an ARM CPU with FPGA fabric, provides a high-performance, low-power platform for accelerating the RSA algorithm. This paper adopts the non-subtraction Montgomery algorithm and the Chinese Remainder Theorem (CRT) to implement high-speed RSA processors, and deploys a 48-node cluster infrastructure based on the Zynq SoC to achieve extremely high scalability and throughput for RSA computing. In this design, we use the ARM to implement node-to-node communication with the Message Passing Interface (MPI), while using the FPGA to handle the complex calculations. The experimental results show that overall performance scales linearly with the number of nodes. The cluster achieves a 6× to 9× speedup over a multi-core desktop (Intel i7-3770) and comparable performance to a many-core server (288 cores). In addition, we gain up to 2.5× better energy efficiency compared to these two traditional platforms.
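The CRT speedup mentioned above replaces one full-size modular exponentiation with two half-size ones; a minimal software sketch follows (Montgomery multiplication, which the hardware design also uses, is omitted here, and the toy key in the test is illustrative only):

```python
def rsa_crt_decrypt(c, d, p, q):
    # Standard CRT (Garner) recombination: two exponentiations modulo the
    # half-size primes p and q instead of one modulo n = p*q.
    dp, dq = d % (p - 1), d % (q - 1)
    q_inv = pow(q, -1, p)        # modular inverse (Python 3.8+)
    m1 = pow(c, dp, p)
    m2 = pow(c, dq, q)
    h = (q_inv * (m1 - m2)) % p
    return m2 + h * q
```

With the textbook key p=61, q=53, e=17, d=2753, the ciphertext pow(65, 17, 3233) decrypts back to 65.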
{"title":"Acceleration of RSA processes based on hybrid ARM-FPGA cluster","authors":"Xu Bai, Lei Jiang, Qiong Dai, Jiajia Yang, Jianlong Tan","doi":"10.1109/ISCC.2017.8024607","DOIUrl":"https://doi.org/10.1109/ISCC.2017.8024607","url":null,"abstract":"Cooperation of software and hardware with hybrid architectures, such as Xilinx Zynq SoC combining ARM CPU and FPGA fabric, is a high-performance and low-power platform for accelerating RSA Algorithm. This paper adopts the none-subtraction Montgomery algorithm and the Chinese Remainder Theorem (CRT) to implement high-speed RSA processors, and deploys a 48-node cluster infrastructure based on Zynq SoC to achieve extremely high scalability and throughput of RSA computing. In this design, we use the ARM to implement node-to-node communication with the Message Passing Interface (MPI) while use the FPGA to handle complex calculation. Finally, the experimental results show that the overall performance is linear with the number of nodes. And the cluster achieves 6×∼9× speedup against a multi-core desktop (Intel i7-3770) and comparable performance to a many-core server (288-core). In addition, we gain up to 2.5× energy efficiency compared to these two traditional platforms.","PeriodicalId":106141,"journal":{"name":"2017 IEEE Symposium on Computers and Communications (ISCC)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114265015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-07-01 DOI: 10.1109/ISCC.2017.8024526
Crescenzo Diomaiuta, Maria Mercorella, Mario Ciampi, G. Pietro
Clinical summarization means the collection and synthesis of a patient's significant data, undertaken in order to support health-care providers in the process of patient care. Considering that medical information comes from multiple sources, a system for the automatic generation of problem lists could prove to be very effective in terms of saving time in the analysis of large amounts of medical data. In this paper, we propose a system able to acquire and present relevant references to medical disorders from a patient's history, producing a subject-oriented summary. The implemented system relies on an NLP pipeline, for the extraction of relevant medical entities contained in narrative health records, and on several queries, necessary for the scanning of structured documents. The tool aggregates any medical problems, performed procedures, and prescribed medications, providing the healthcare practitioner with a visual summary of the patient's data.
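A full clinical NLP pipeline is beyond the scope of an abstract; as a toy stand-in, dictionary matching over a narrative note illustrates the entity-extraction step (the vocabulary and note are invented, and a real system would use a dedicated clinical NLP toolkit):

```python
import re

def extract_problems(note, vocabulary):
    # Toy stand-in for the NLP step: match known disorder terms in a
    # free-text note, preserving vocabulary order in the problem list.
    found = []
    for term in vocabulary:
        if re.search(r"\b" + re.escape(term) + r"\b", note, re.IGNORECASE):
            found.append(term)
    return found
```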
{"title":"A novel system for the automatic extraction of a patient problem summary","authors":"Crescenzo Diomaiuta, Maria Mercorella, Mario Ciampi, G. Pietro","doi":"10.1109/ISCC.2017.8024526","DOIUrl":"https://doi.org/10.1109/ISCC.2017.8024526","url":null,"abstract":"Clinical summarization means the collection and synthesis of a patient's significant data, undertaken in order to support health-care providers in the process of patient care. Considering that medical information comes from multiple sources, a system for the automatic generation of problem lists could prove to be very effective in terms of saving time in the analysis of large amounts of medical data. In this paper, we propose a system able to acquire and present relevant references to medical disorders from a patient's history, producing a subject-oriented summary. The implemented system relies on an NLP pipeline, for the extraction of relevant medical entities contained in narrative health records, and on several queries, necessary for the scanning of structured documents. The tool aggregates any medical problems, performed procedures, and prescribed medications, providing the healthcare practitioner with a visual summary of the patient's data.","PeriodicalId":106141,"journal":{"name":"2017 IEEE Symposium on Computers and Communications (ISCC)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114583450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-07-01 DOI: 10.1109/ISCC.2017.8024538
Paolo Zampognaro, R. Buys, Deirdre M J Walsh, C. Woods, Fabio Melillo
In recent years, due to the emergence of new challenges in the health care domain, particular emphasis has been placed on the application of ICT in this sector. This, in turn, has stimulated analysis of software requirements engineering techniques and their applicability in this context. The efficient application of the use-case-based technique within the PATHway project's user requirements elicitation and formalisation activities is described here. Efficiency was achieved by means of (i) a light and progressive introduction of the UC (Use Case) instrument to the clinical teams by exploiting informal stories (i.e. anecdotes), (ii) a careful evaluation of the best UC description structure and, finally, (iii) the introduction of co-design moments with the final users (i.e. the patients) to speed up UC adaptation by the two main teams involved (i.e. the technical team and the clinical team). The qualitative results demonstrate the advantages and limits of this technique applied to the context of cardiovascular home rehabilitation. Additionally, the study highlighted a smooth integration between the distinct phases of the requirements engineering process, which can lead, in general, to a return on investment.
{"title":"A Use Case based requirements specification approach to support the development of a rehabilitation system for CVD patients: The PATHway project","authors":"Paolo Zampognaro, R. Buys, Deirdre M J Walsh, C. Woods, Fabio Melillo","doi":"10.1109/ISCC.2017.8024538","DOIUrl":"https://doi.org/10.1109/ISCC.2017.8024538","url":null,"abstract":"Over the last years, due to the emergency of new challenges in the area of the health care domain, particular emphasis was dedicated to the application of ICT in this sector. This, in turn, stimulated the analysis over the software requirements engineering techniques and their applicability in this context. The efficient application of the use-case based technique, within the PATHway project user requirements elicitation and formalisation activities, is here described. Efficiency has been reached by means of (i) a light and progressive introduction of UCs (Use Cases) instrument to the clinical teams by exploiting informal stories (i.e. anecdotes), (ii) a careful evaluation of the best UC description structure and, finally, (iii) an introduction of co-design moments with the final users (i.e. the patients) to speed up the UCs adaptation by the two main involved teams (i.e. technical team and clinical team). The qualitative results demonstrate advantages and limits of such technique applied to the context of cardiovascular home rehabilitation. 
Additionally the study has highlighted a smooth integration between the distinct phases of the requirements engineering process which can lead, in general, to a return of investment.","PeriodicalId":106141,"journal":{"name":"2017 IEEE Symposium on Computers and Communications (ISCC)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2017-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115303872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}