An event is not a one-off occurrence; rather, it is a continuous chain of smaller events. Along with event detection, event evolution is equally important. Most existing methods ignore the evolution of events and further fail to identify the influential spreaders of an event. Moreover, given the impact of continuously developing events, predicting their linkages with other premature events will help in economic domains such as commodity and stock markets. Twitter is widely used as an effective source of data collection and provides unique keywords (hashtags). However, it does not provide insights about the trend, which makes it difficult to detect events. The research experiment environment and preliminary results presented in this paper are currently based upon a historical Brexit dataset. However, we plan to define a proper way to simultaneously snapshot the whole dynamic dataset at different times and eventually use real-time data. Our motivation for this research is to develop a new data analytic system and supporting techniques to find events for enhancing the decision-making process.
{"title":"Detecting Present Events to Predict Future: Detection and Evolution of Events on Twitter","authors":"Muhammad K. Ali, Lu Liu, Mohsen Farid","doi":"10.1109/SOSE.2018.00023","DOIUrl":"https://doi.org/10.1109/SOSE.2018.00023","url":null,"abstract":"Event is not a one-off occurrence of something, rather it is a continuous chain of small events. Along with event detection, the event evolution is equally important. Most existing methods ignore the evolution of the events and further fails to identify the influential spreaders of that event. Moreover, for the impact of the continuous developing events, predicting the linkages with other premature events, will help in the domains of economy such as commodities and stock markets. Twitter is widely used as an effective source of data collection and provides unique keywords (hashtags). However, it does not provide insights about the trend which makes it difficult to detect events. The research experiment environment and preliminary results presented in this paper are currently based upon the Brexit’s historical dataset. However, we plan to define proper way to simultaneously snapshot whole dynamic dataset at different times and eventually use real-time data. Our motivation for the research is to develop a new data analytic system and supporting techniques to find events for enhancing the decision-making,,,, process.","PeriodicalId":414464,"journal":{"name":"2018 IEEE Symposium on Service-Oriented System Engineering (SOSE)","volume":"386 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124783340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
JointCloud is a new generation of cloud computing model that facilitates developers in customizing cloud services. JCLedger is a blockchain-based distributed ledger for JointCloud computing that can make the exchange of cloud resources more reliable and convenient; it is the combination of JointCloud and blockchain. One of the most important elements for creating JCLedger is the consensus algorithm. PoW (Proof of Work) is the consensus algorithm of Bitcoin, which has proved to be quite safe but requires a great deal of computing power. The original PoW is not suitable for JCLedger because the participants are not equal in computing power, which may lead to an accounting monopoly, and its throughput cannot satisfy the requirement of the massive and high-frequency transactions in JointCloud. In this paper, we propose a PoW-based consensus algorithm called Proof of Participation and Fees (PoPF), which can save much computing power and handle transactions more efficiently for JCLedger. In our design, only the candidates have the opportunity to mine, and the candidates are chosen according to a ranking determined by two factors: the number of times a participant has been the accountant and the fees the participant has paid. The difficulty of the PoW hash puzzle differs among candidates (a higher ranking means easier mining). The simulation experiment shows that the distribution of accountants is well-balanced; that is, the unequal computing power of participants in JointCloud is shielded, and all users who have contributed enough to JCLedger will have the opportunity to become accountants.
{"title":"PoPF: A Consensus Algorithm for JCLedger","authors":"Xiang Fu, Huaimin Wang, Peichang Shi, Haibo Mi","doi":"10.1109/SOSE.2018.00034","DOIUrl":"https://doi.org/10.1109/SOSE.2018.00034","url":null,"abstract":"JointCloud is a new generation of cloud computing model which facilitates developers to customize cloud services. JCLedger is a blockchain based distributed ledger for JointCloud computing which can make cloud resources exchange more reliable and convenient, and it is the combination of JointCloud and BlockChain. One of the most important elements for creating JCLedger is the consensus algorithm. PoW (Proof of Work) is the consensus algorithm for Bitcoin, which is proved to be quite safe but needs much computing power. The original PoW is not suitable for JCLedger because the identities of participants are not equal in computing power, which may lead to accounting monopoly, and the throughput cannot satisfy the requirement of the massive and high-frequency transactions in JointCloud. In this paper, we propose a PoW based consensus algorithm called Proof of Participation and Fees (PoPF), which can save much computing power and handled transactions more efficiently for JCLedger. In our design, only the candidates have the opportunities for mining and the candidates are chosen according to the ranking which is determined by two factors: the times of the participant to be the accountant and the fees the participant has paid. The difficulty for candidates of solving the PoW hash puzzle is different (the higher ranking means easier for mining). The simulation experiment shows that the distribution of accountants is well-balanced, that is to say, the unequal computing power of participants in JointCloud is shielded, and all the users who have enough contribution in JCLedger will have the opportunities to be accountants.","PeriodicalId":414464,"journal":{"name":"2018 IEEE Symposium on Service-Oriented System Engineering (SOSE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129033731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Riccardo Venanzi, B. Kantarci, L. Foschini, P. Bellavista
The recent advances in telecommunications, the wide availability of powerful and always-connected smart devices, and the wide adoption of cloud computing services are paving the way towards the concept of fog computing. An outstanding problem in this area is the availability of effective application-layer protocols between service seekers and providers at the Internet of Things (IoT) level, because most IoT nodes run on batteries. Hence, the IoT-fog discovery service must be energy efficient to prolong the lifetime of smart objects. This paper presents a thorough study of our previously proposed MQTT-driven IoT-fog integration, namely Power Efficient Node Discovery (PEND), and investigates the impact of dynamic arrival patterns on its performance. The MQTT broker serves as a fog node that triggers turning the Bluetooth Low Energy (BLE) interfaces of the surrounding objects on or off by monitoring their trajectories. Furthermore, it leverages this additional location awareness to significantly reduce the power consumption of the device discovery process for mobile devices. With this motivation, we present a detailed performance study of PEND under various settings and provide in-depth discussions of the lessons learned about the obtainable energy savings and the effectiveness of our enhanced BLE device discovery solution. The results we present are valuable for the fog community to design new optimizations and to refine the whole IoT device discovery process for better efficiency and scalability.
{"title":"MQTT-Driven Node Discovery for Integrated IoT-Fog Settings Revisited: The Impact of Advertiser Dynamicity","authors":"Riccardo Venanzi, B. Kantarci, L. Foschini, P. Bellavista","doi":"10.1109/SOSE.2018.00013","DOIUrl":"https://doi.org/10.1109/SOSE.2018.00013","url":null,"abstract":"The recent advances in telecommunications, wide availability of powerful and always-connected smart devices, and the wide adoption of cloud computing services are paving the way towards the concept of fog computing. An outstanding problem in this area is the availability of effective application-layer protocols between service seekers and providers at the Internet of Things (IoT) level because most IoT nodes run on batteries. Hence, IoT-Fog discovery service must be energy efficient to prolong the lifetime of smart objects. This paper presents a thorough study of our previously proposed MQTT-driven IoT-fog integration, namely the Power Efficient Node Discovery (PEND), and investigates the impact of dynamic arrival patterns on its performance. The MQTT broker serves as a fog node to trigger turning on/off the Bluetooth Low Energy (BLE) interfaces of the surrounding objects by monitoring their trajectories. Furthermore it leverages this additional location awareness to significantly reduce the power consumption of the device discovery process for mobile devices. With this motivation, we present a detailed performance study of PEND under various settings and provide in-depth discussions of some lessons about the obtainable energy saving and the effectiveness of the our enhanced BLE device discovery solution. The results we present are valuable for the fog community to design new optimizations and to refine the whole IoT device discovery process to the purpose of better efficiency and scalability.","PeriodicalId":414464,"journal":{"name":"2018 IEEE Symposium on Service-Oriented System Engineering (SOSE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129474698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The assurance of high-quality requirements and system specifications takes a huge effort and requires semantically integrated requirements. Especially when using different representations of requirements, semantic integration is quite difficult. Today's requirements management tools integrate requirements only syntactically and thereby provide only limited features to support the requirements analyst during quality assurance. The following paper describes a concept for the semantic integration of requirements documented in different representations into one common model. This model allows analysing and measuring the quality of the integrated requirements by applying algorithms. The described integration concept is applied to a use-case-driven requirements elicitation and analysis process using different representations such as several UML diagrams or even template-based textual requirements. The foundation for measuring the quality of the integrated requirements is explained.
{"title":"Semantic Integration of System Specifications to Support Different System Engineering Disciplines","authors":"Alexander Rauh, Wolfgang Golubski, Stefan Queins","doi":"10.1109/SOSE.2018.00016","DOIUrl":"https://doi.org/10.1109/SOSE.2018.00016","url":null,"abstract":"The assurance of high quality requirements and system specifications takes a huge effort and requires semantically integrated requirements. Especially when using different representations of requirements, the semantic integration is quite difficult. Today’s requirements management tools integrate requirements only syntactically and, thereby, provide only limited features to support the requirements analyst during quality assurance. The following paper describes a concept for the semantic integration of requirements documented in different representations into one common model. This model allows analysing and measuring the quality of the integrated requirements by applying algorithms. The mentioned integration concept is applied to a use case driven requirements elicitation and analysis process using different representations like several UML diagrams or even template based textual requirements. The foundation to measure the quality of the integrated requirements is explained.","PeriodicalId":414464,"journal":{"name":"2018 IEEE Symposium on Service-Oriented System Engineering (SOSE)","volume":"295 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133166120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hiroyuki Sato, S. Tanimoto, Toru Kobayashi, Atsushi Kanai
In the past decade, the framework of service policies has been established to ensure the transparency and accountability of the operations of ICT services. This is generally called a "trust framework." The framework provides trust to its participants, within which economical operations are enabled. A modern framework of services is empowered by IoT, where the environments often change over time. Here, we have to re-evaluate the policies when we observe changes in the environments. In this paper, we propose a formal model of an adaptive policy evaluation framework that reflects the trust placed in the evaluation policy and the collection of environmental data by IoT devices. The PDP runs under a given trust circle, receives assertions including the policies of peers, and makes a decision. Furthermore, we formalize the adaptive evaluation scheme of policies that reflects the dynamics of a trust circle as affected by the environment of the PDP. The monitor plays an essential role in controlling the trust circle by sensing dynamic changes of the environment, which cause the trust circle to grow or shrink.
{"title":"Adaptive Policy Evaluation Framework for Flexible Service Provision","authors":"Hiroyuki Sato, S. Tanimoto, Toru Kobayashi, Atsushi Kanai","doi":"10.1109/SOSE.2018.00024","DOIUrl":"https://doi.org/10.1109/SOSE.2018.00024","url":null,"abstract":"In the past decade, the framework of service polices has been established to ensure the transparency and accountability of operations of ICT services. This is generally called a \"trust framework.\" The framework provides trust to its participants, in which economical operations are enabled. A modern framework of services are enpowered by IoT, where the environments often change in time. Here, we have to re-evaluate the policies when we observe changes in the environments. In this paper, we propose a formal model of adaptive policy evaluation framework that reflects the trust for the evaluation policy and the collection of environmental data by IoT devices. PDP runs under a given trust circle, receives assertions including policies of peers, and make a decision. Furthermore, we formalize the adaptive evalua- tion scheme of policies that reflects the dynamics of a trust circle which is affected by the environment of PDP. Monitor plays an essential role in controlling the trust circle by sensing the dynamic change of environments, which gives growth or shrink of a trust circle.","PeriodicalId":414464,"journal":{"name":"2018 IEEE Symposium on Service-Oriented System Engineering (SOSE)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130710135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
E. Soares, R. Thiago, L. Azevedo, M. D. Bayser, Viviane Torres da Silva, Renato Cerqueira
When a consumer (or client) invokes a provider's service that takes a long time to process, a synchronous call is not a good option. This kind of communication blocks the consumer until the response arrives and, besides, the invocation timeout can be reached, raising a timeout error. Hence, an asynchronous call is more appropriate: the consumer calls the provider's service and continues processing, and when the provider's service finishes, it pushes the response to the consumer. There are different ways of implementing asynchronous communication. A simple way is for the consumer to provide a callback function that the provider's service invokes when it finishes processing, performing a server push operation. However, there are many cases where this solution cannot be applied, requiring other alternatives, e.g.: the consumer keeps up with the service execution by checking its readiness or state and, when it finishes, calls another provider's service to get the result; or the consumer and provider keep an open connection for the asynchronous communication. This work analyzes the main server push technologies used for web development, presenting their weaknesses and strengths and the main existing challenges. As a result, we provide a comparison of the technologies and a classification based on multiple qualitative dimensions that helps one choose the technology that fits one's requirements and/or can be used to guide future research in this field.
{"title":"Evaluation of Server Push Technologies for Scalable Client-Server Communication","authors":"E. Soares, R. Thiago, L. Azevedo, M. D. Bayser, Viviane Torres da Silva, Renato Cerqueira","doi":"10.1109/SOSE.2018.00010","DOIUrl":"https://doi.org/10.1109/SOSE.2018.00010","url":null,"abstract":"When a consumer (or client) invokes a provider’s service that has a long time processing, a synchronous call is not a good option. This kind of communication blocks the consumer until the response arrives, and, besides, the invocation timeout can be reached raising a timeout error. Hence, an asynchronous call is more appropriate where the consumer calls the provider’s service and continues processing, and, when the provider’s service finishes it pushes the response to the consumer. There are different ways for implementing an asynchronous communication. A simple way is the consumer providing a callback function which the provider’s service invokes when it finishes the processing, performing a server push operation. However, there are many cases where this solution cannot be applied requiring other alternatives, e.g.: the consumer keeps up with the service execution checking for its readiness or state, and when it finishes, it calls another provider’s service to get the result; the consumer and provider keep an open connection for the asynchronous communication. This work analyzes the main server push technologies used for web development, presenting their weaknesses and strengths, and the existing main challenges. As a result, we provide a technologies comparison, and a classification based on multiple qualitative dimensions that helps one to choose the technology that fits its requirements and/or can be used to guide future researches in this field.","PeriodicalId":414464,"journal":{"name":"2018 IEEE Symposium on Service-Oriented System Engineering (SOSE)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133405736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Microservices decouple network-accessible system components to support independent development, deployment, and scalability. The architecture of microservice-based software systems is typically not defined upfront but emerges by dynamically assembling services to systems. This makes it hard to extract component relations from static sources since component relationships may only become evident at runtime. Existing systems focus either on the static structure of service relations, neglecting runtime properties, or on (short-term) monitoring of runtime properties to detect errors. We present an approach to extract and analyze the architecture of a microservice-based software system based on a combination of static service information with infrastructure-related and aggregated runtime information.
{"title":"An Approach to Extract the Architecture of Microservice-Based Software Systems","authors":"Benjamin Mayer, R. Weinreich","doi":"10.1109/SOSE.2018.00012","DOIUrl":"https://doi.org/10.1109/SOSE.2018.00012","url":null,"abstract":"Microservices decouple network-accessible system components to support independent development, deployment, and scalability. The architecture of microservice-based software systems is typically not de?ned upfront but emerges by dynamically assembling services to systems. This makes it hard to extract component relations from static sources since component relationships may only become evident at runtime. Existing systems focus either on the static structure of service relations, neglecting runtime properties, or on (short-term) monitoring of runtime properties to detect errors. We present an approach to extract and analyze the architecture of a microservice-based software system based on a combination of static service information with infrastructure-related and aggregated runtime information.","PeriodicalId":414464,"journal":{"name":"2018 IEEE Symposium on Service-Oriented System Engineering (SOSE)","volume":"198 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122327561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In JointCloud computing, the hypervisor used by each cloud plays a key role in providing services and protection for guest virtual machines (VMs). Unfortunately, a commodity hypervisor usually has a considerable attack surface, and its memory is especially prone to being tampered with by an attacker who resides in one VM and then threatens the security of other co-located VMs. To mitigate such a threat, previous solutions proposed an out-of-the-box design that leverages nested virtualization to introduce a higher-privileged software layer (a nested hypervisor) below the hypervisor. It also installs a security monitor into a trusted VM, which is protected by the nested hypervisor and isolated from the untrusted hypervisor. The monitor is responsible for dynamically validating the behaviors of the untrusted hypervisor. Although monitoring from outside the hypervisor can help ensure security, the large number of context switches caused by nested virtualization incurs unacceptable overheads and makes this approach unsuitable for the cloud environment. In this paper, we introduce In-Hypervisor Memory Introspection (IHMI), an in-the-box way to monitor the hypervisor based on nested virtualization. Our system puts the monitor into the untrusted hypervisor for efficiency while guaranteeing the same level of memory security as monitoring the hypervisor from a separate secure VM. By leveraging the hardware virtualization features of current processors, IHMI isolates the monitor from the hypervisor via the nested page table and implements an efficient switch between them. Further, IHMI configures a uni-directional mapping for the monitor, which allows the monitor to access the hypervisor's memory at native speed while forbidding the hypervisor from accessing the monitor's memory. Our IHMI system is currently still at an early stage, and we report our design as well as preliminary evaluation results in this paper.
{"title":"Secure and Efficient In-Hypervisor Memory Introspection Using Nested Virtualization","authors":"Weiwen Tang, Zeyu Mi","doi":"10.1109/SOSE.2018.00031","DOIUrl":"https://doi.org/10.1109/SOSE.2018.00031","url":null,"abstract":"In JointCloud computing, the hypervisor used by each cloud plays a key role in providing services and protection for guest virtual machines (VMs). Unfortunately, the commodity hypervisor usually has a considerable attack surface and its memory is especially prone to be tampered with by an attacker who resides in one VM and then threatens the security of other co-located VMs. To mitigate such threat, previous solutions proposed an out-of-the-box design which leverages the nested virtualization to introduce a higher privileged software layer (a nested hypervisor) below the hypervisor. It also installs a security monitor into a trusted VM which is protected by the nested hypervisor and isolated from the untrusted hypervisor. The monitor is responsible for dynamically validating the behaviors of the untrusted hypervisor. Although monitoring from outside of the hypervisor can help ensure security, the large number of context switches caused by the nested virtualization incurs unacceptable overheads and makes this approach unsuitable for the cloud environment. In this paper, we introduce In-Hypervisor Memory Introspection (IHMI), an in-the-box way to monitor the hypervisor based on the nested virtualization. Our system puts the monitor into the untrusted hypervisor for efficiency while guaranteeing the same level of memory security as monitoring the hypervisor from a separated secure VM. By leveraging hardware virtualization features of current processors, IHMI isolates the monitor from the hypervisor via the nested page table and implements an efficient switch between them. Further, IHMI configures a uni-directional mapping for the monitor which allows the monitor to access the hypervisor’s memory at native speed while forbidding the hypervisor from accessing the monitor’s memory. Our IHMI system is currently still in an early stage and we report our design as well as preliminary evaluation results in this paper.","PeriodicalId":414464,"journal":{"name":"2018 IEEE Symposium on Service-Oriented System Engineering (SOSE)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131866515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The recent rise of cloud computing poses serious challenges for software engineering because it adds complexity not only to the platform and infrastructure but to the software too. The demands on system scalability, performance and reliability are ever increasing. Industry solutions with widespread adoption include the microservices architecture, container technology and the DevOps methodology. These approaches have changed software engineering practice in such a profound way that we argue it is becoming a paradigm shift. In this paper, we examine the current support of programming languages for the key concepts behind this change in software engineering practice and argue that a novel programming language is required to support the new paradigm. We report on a new programming language, CAOPLE, and its associated Integrated DevOps Environment, CIDE, and demonstrate the utility of both.
{"title":"If Docker is the Answer, What is the Question?","authors":"Hong Zhu, Ian Bayley","doi":"10.1109/SOSE.2018.00027","DOIUrl":"https://doi.org/10.1109/SOSE.2018.00027","url":null,"abstract":"The recent rise of cloud computing poses serious challenges for software engineering because it adds complexity not only to the platform and infrastructure, but to the software too. The demands on system scalability, performance and reliability are ever increasing. Industry solutions with widespread adoption include the microservices architecture, the container technology and the DevOps methodology. These approaches have changed software engineering practice in such a profound way that we argue that it is becoming a paradigm shift. In this paper, we examine the current support of programming languages for the key concepts behind the change in software engineering practice and argue that a novel programming language is required to support the new paradigm. We report a new programming language CAOPLE and its associated Integrated DevOps Environment CIDE and demonstrate the utility of both.","PeriodicalId":414464,"journal":{"name":"2018 IEEE Symposium on Service-Oriented System Engineering (SOSE)","volume":"309 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121686222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This article presents challenges and solutions to testing systems based on the underlying products and services commonly referred to as the Internet of ‘things’ (IoT).
{"title":"Testing IoT Systems","authors":"J. Voas, D. R. Kuhn, P. Laplante","doi":"10.1109/SOSE.2018.00015","DOIUrl":"https://doi.org/10.1109/SOSE.2018.00015","url":null,"abstract":"This article presents challenges and solutions to testing systems based on the underlying products and services commonly referred to as the Internet of ‘things’ (IoT).","PeriodicalId":414464,"journal":{"name":"2018 IEEE Symposium on Service-Oriented System Engineering (SOSE)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115057987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}