Title: Empowering road vehicles to learn parking situations based on optical sensor measurements
Authors: Markus Hiesmair, K. Hummel
DOI: 10.1145/3131542.3140277

Connected road vehicles are about to create a massive Internet of mobile things, equipped with increasing sensing capabilities and autonomy. In particular, on-board distance sensors allow free road-side parking spaces to be detected in passing. Upon receiving parking information, other vehicles can navigate efficiently to free slots, decreasing parking space search times. Yet, in real road situations, sensed information may be misleading due to the mobility of the sensor, driving on multi-lane roads, and the unknown semantics of sensed free spaces. In this demo, we present a drive-by parking space sensing system consisting of a LIDAR optical distance sensor and a GPS receiver connected to a Raspberry Pi. Parking situations are estimated by applying machine learning. We demonstrate the effectiveness of our solution in standard parking situations, in the presence of obstacles, and when overtaking bicycles and cars on multi-lane roads.
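The core of drive-by sensing is turning a stream of timestamped distance readings into candidate parking gaps. The following is a minimal sketch of that idea only; the function name, thresholds, and input format are illustrative assumptions, not the authors' actual pipeline (which additionally applies machine learning to classify the detected gaps).

```python
# Hedged sketch: deriving candidate road-side parking gaps from drive-by
# distance readings. CAR_LENGTH_M and FREE_DIST_M are assumed values,
# not taken from the paper.

CAR_LENGTH_M = 4.5   # assumed minimum gap length that fits a car
FREE_DIST_M = 2.0    # assumed lateral distance beyond which space counts as free

def find_gaps(samples):
    """samples: list of (timestamp_s, lateral_distance_m, speed_mps).
    Returns a list of (start_ts, end_ts, gap_length_m) for open stretches
    long enough to hold a car, with gap length integrated from speed."""
    gaps, start, length = [], None, 0.0
    prev_t = None
    for t, dist, speed in samples:
        dt = 0.0 if prev_t is None else t - prev_t
        prev_t = t
        if dist > FREE_DIST_M:            # sensor sees open space
            if start is None:
                start = t
            length += speed * dt          # distance travelled while space is open
        else:                             # space occupied: close any running gap
            if start is not None and length >= CAR_LENGTH_M:
                gaps.append((start, t, length))
            start, length = None, 0.0
    if start is not None and length >= CAR_LENGTH_M:
        gaps.append((start, prev_t, length))
    return gaps
```

At 5 m/s, two seconds of open space yields a 10 m gap, comfortably above a car length; at 2 m/s the same open interval yields only 4 m and is discarded.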
Title: Enhancing workplace learning by augmented reality
Authors: Bernhard Mandl, Marius Stehling, T. Schmiedinger, Martin Adam
DOI: 10.1145/3131542.3140265

This poster highlights the opportunities of augmented reality (AR) for enhancing workplace learning. It presents a concept for workplace learning that incorporates the digital twin (DT) concept to support the operator during machining tasks. By providing in-depth information about the workpiece and the machine, the operator is supported in fulfilling the proposed task in time and at the defined level of quality. Preliminary data indicated that visual support by AR technology reduces the processing time of simple tasks. The challenges of applying AR technology to industrial tasks were investigated in an additional case study.
Title: HoloCollab: a shared virtual platform for physical assembly training using spatially-aware head-mounted displays
Authors: Markus Funk, Mareike Kritzler, F. Michahelles
DOI: 10.1145/3131542.3131559

Today's industrial jobs require a skilled and trained workforce, as tasks such as maintenance, service, and repair are becoming more complicated and more demanding. Therefore, both education and training for executing these tasks are becoming more important. Usually, training is conducted on-site at designated training facilities with physical hardware. However, on-site hands-on training can be expensive, as it requires designated training facilities that must be maintained and traveled to. With Augmented Reality (AR) becoming a substantial part of modern-day manufacturing, using AR-based systems to train new workforces is becoming increasingly popular. In this paper, we investigate different training environments that use AR-based support during workforce training. We map the design space for a shared collaborative AR-based learning space, and present the concept and implementation of HoloCollab, which combines a scalable virtual representation of an industrial scenario, such as assembly, with the benefit of having a trainer on-site with a trainee.
Title: Intuitive interaction with semantics using augmented reality: a case study of workforce management in an industrial setting
Authors: Iori Mizutani, Mareike Kritzler, Kimberly García, F. Michahelles
DOI: 10.1145/3131542.3131550

Semantic technologies are a powerful tool for giving meaning to flat data by creating ontologies that link information systems that live separately but complement each other, providing a common understanding of a domain. However, these technologies are often seen as difficult to profit from because of their limited accessibility to non-experts. Tools for manipulating and interacting with information contained in ontologies are restricted to the very tools experts use to create such ontologies, or at best to traditional user interfaces built for specific applications relying on ontologies. In this paper, we present an innovative way to access, manipulate, and interact with semantic information by combining this powerful data basis with the benefits of Augmented Reality technologies. We look at an industrial setting with role-based, intuitive access to a semantic source of contextualized information covering workforce, asset, and inventory management as well as assigned tasks. In this approach, industrial managers and service technicians are supported in their decision-making processes and in the execution of assigned tasks.
Title: SDR processing delay estimation applying correlation detection for structure health monitoring using multi-subcarrier multiple access
Authors: Tatsuki Fujiwara, Yohei Nakano, J. Mitsugi, Yuusuke Kawakita, H. Ichikawa
DOI: 10.1145/3131542.3140273

Wireless and battery-less structural health monitoring (SHM) that detects structural damage at low cost is required. To achieve this, the multi-subcarrier multiple access (MSMA) communication method is being considered. In MSMA, the time synchronization of sensing data is shifted by software-defined radio (SDR) processing. Therefore, when an SHM method requiring time-synchronized sensing data is used, time synchronization must take the SDR processing delay into account. In this study, we propose a system that estimates the SDR processing delay by correlation detection and thereby time-synchronizes the sensing data. We measured the time accuracy of the SDR delay estimation by installing the system on an experimental object. Results showed that the processing delay estimation error varied, and that time synchronization can be achieved with a single sensing pass using the correlation-based SDR processing delay estimation method.
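Correlation detection for delay estimation generally means sliding a known reference sequence along the received stream and taking the lag with the highest correlation. The sketch below shows only that generic idea in pure Python; the function name and input format are assumptions, and the authors' MSMA-specific processing is not reproduced.

```python
# Hedged sketch: estimating a processing delay (in samples) by
# cross-correlating a known reference sequence against the received
# stream — a generic stand-in for correlation detection.

def estimate_delay(received, reference):
    """Return the lag (in samples) at which `reference` best matches
    `received`, found by maximizing the sliding inner product."""
    best_lag, best_score = 0, float("-inf")
    n = len(reference)
    for lag in range(len(received) - n + 1):
        # inner product of the reference with the aligned window
        score = sum(r * x for r, x in zip(reference, received[lag:lag + n]))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag
```

Once the lag is known, the sensing stream can be shifted by that many samples to restore time synchronization.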
Title: log files. stories from the internet of things
Authors: Tina Frank, Marianne Pührerfellner, Barbara von Rechbach, David Lechner
DOI: 10.1145/3131542.3140279

The project "log files. stories from the internet of things" creates design proposals that explore new ways of visual communication between humans and machines in an imaginary near future. Our project imagines a series of artistic what-if scenarios, using methods of speculative design, that investigate the life of Internet of Things (IoT) black boxes. These artistic findings reveal possible implications of emerging technologies in the IoT, such as privacy, transparency, and participation. Visual stories provide a lens for exploring the social, ethical, and aesthetic implications of society-technology relations in the era of connected things.
Title: HoloLens is more than air Tap: natural and intuitive interaction with holograms
Authors: Markus Funk, Mareike Kritzler, F. Michahelles
DOI: 10.1145/3131542.3140267

Augmented Reality (AR) is becoming increasingly popular, and many applications across multiple domains are developed for AR hardware such as the Microsoft HoloLens and similar Head-Mounted Displays (HMDs). Most AR applications visualize information that was not visible before and enable interaction with it using voice input, gaze tracking, and gesture interaction. However, to be consistent across all applications running on an AR device, the gestures available to developers are very limited; in our use case, on a Microsoft HoloLens, these are just the Air Tap and a Manipulation gesture. While this is great for users, who only have to learn a defined set of gestures, it is not always easy for developers to create a natural interaction experience, as the gestures considered natural depend on the scenario. In this paper, we use an additional sensor, a Microsoft Kinect, to allow users to interact naturally and intuitively with holographic content displayed on a HoloLens. As a proof of concept, we give five examples of gestures using natural interaction for the HoloLens.
Title: A multi-tier data reduction mechanism for IoT sensors
Authors: Liang Feng, P. Kortoçi, Yong Liu
DOI: 10.1145/3131542.3131557

The increasing number and variety of IoT (Internet of Things) devices produce a huge amount of diverse data upon which applications are built. Depending on the use case, the sampling rate of IoT sensors may be high, leading the devices to fast energy and storage depletion. One option to address these issues is to perform data reduction at the source nodes so as to decrease both energy consumption and storage use. Most currently available solutions perform data reduction only at a single tier of the IoT architecture (e.g., at gateways), or simply operate a posteriori, once the data transmission has already taken place (i.e., at the cloud data center). This paper proposes a multi-tier data reduction mechanism deployed at both the gateways and the edge tier. At the gateways, we apply the PIP (Perceptually Important Point) method to represent the features of a time series with a finite amount of data. We extend the algorithm with several techniques, namely interval restriction, dynamic caching, and weighted sequence selection. At the edge tier, we propose a data fusion method based on optimal set selection, which employs a simple strategy to fuse data in the same time domain for a specific location. Finally, we evaluate the performance of the proposed filtering and fusion techniques. The obtained results demonstrate the efficiency of the proposed mechanism in terms of time and accuracy.
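The PIP method named above reduces a time series by keeping its endpoints and then repeatedly adding the point that deviates most from the line between its neighboring kept points. The sketch below implements that baseline selection with a vertical-distance metric; the paper's exact distance metric and its extensions (interval restriction, dynamic caching, weighted sequence selection) are not reproduced, and the function names are mine.

```python
# Hedged sketch of baseline Perceptually Important Point (PIP) selection,
# the time-series reduction step applied at gateways.

def vertical_distance(series, i, left, right):
    """Vertical distance of point i from the straight line joining
    points `left` and `right` of `series` (a list of floats)."""
    if right == left:
        return 0.0
    t = (i - left) / (right - left)
    interp = series[left] + t * (series[right] - series[left])
    return abs(series[i] - interp)

def select_pips(series, k):
    """Reduce `series` to (at least) its k perceptually most important
    points and return their sorted indices. Endpoints are always kept."""
    n = len(series)
    if k >= n:
        return list(range(n))
    kept = [0, n - 1]
    while len(kept) < k:
        kept.sort()
        best_i, best_d = None, -1.0
        # candidate points lie between each adjacent pair of kept PIPs
        for left, right in zip(kept, kept[1:]):
            for i in range(left + 1, right):
                d = vertical_distance(series, i, left, right)
                if d > best_d:
                    best_i, best_d = i, d
        if best_i is None:   # every point already kept
            break
        kept.append(best_i)
    kept.sort()
    return kept
```

For the spike series [0, 0, 5, 0, 0] with k = 3, the endpoints plus the spike at index 2 are kept, which is exactly the shape-preserving behavior PIP reduction aims for.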
Title: Role-based ontology-driven knowledge representation for IoT-enabled waste management
Authors: I. Sosunova, A. Zaslavsky, P. Fedchenkov
DOI: 10.1145/3131542.3140272

We propose a method of role-based, ontology-driven knowledge representation for IoT-enabled waste management. We describe the creation of the ontology and rules, and a multistage data processing method that extracts knowledge about specific, nontrivial situations on their basis. We also discuss and demonstrate the implementation of the proposed system as a web application whose content types are based on the ontology and whose data processing follows the proposed algorithm.
Title: Challenges for SMEs entering the IoT world: success is about so much more than technology
Authors: Stina Nylander, Anders Wallberg, Pär Hansson
DOI: 10.1145/3131542.3131547

We describe a case in which a small company with no competence in the Internet of Things attempted to develop a connected product. Many of the issues that came up during this process were not centered on the technology or the technical solution. Instead, the IoT brings additional challenges connected to the changed customer-provider relationship, the need for a new business model, the need to understand the IoT ecosystem so that a product fits in, and the challenge of getting access to all the different competences needed. Our results suggest that governmental strategies, funding systems, investors, and other stakeholders need to broaden their perspective on the IoT from a technological focus to include the many other aspects that must be in place for companies to succeed in the IoT world, such as business models and customer relationships, as well as the positioning of a product in an IoT ecosystem.