The CarStar interface and the subsequent user evaluation show a correlation between visually analogous representations of functionality and ease of use. Tapping into users' current knowledge base about automobiles appears to be among the more effective ways of demonstrating functionality, achieved by displaying the system's various controls and data feeds. Smart devices equipped with such an interface can enhance the user experience by providing functionality through recognition. Future work includes improving the design, implementing the interface on a real touch-screen mobile device such as the Apple iPhone or HTC Touch, and conducting further evaluation.
"An Intuitive Touch Screen Interface for Car Remote Control," Michael Moski and P. Atrey. EMASC '14, Nov. 7, 2014. https://doi.org/10.1145/2661704.2661710
A human surrogate is any object, whether virtual, physical, or a blend of the two, that acts as a stand-in for a human. Surrogates can be directly controlled or simply given a specific task to carry out on behalf of a human. In the context of a virtual environment, a surrogate is more often referred to as an avatar, reflecting that it is intended to represent the person in some context rather than just carrying out a specific task on his or her behalf. In essence, an avatar is a manifestation of the human who is "inhabiting" it. A person's avatar can look like the inhabiter, look like some other person, or even be a personification of some non-human character. Generally, the inhabiter controls all critical actions, verbal and non-verbal, of his or her avatar, although the specific manifestation of the avatar may place constraints on how it carries out some of these desired behaviors. The research presented here involves the use of avatars and other forms of human surrogates as remote entities that can be employed in situations involving interpersonal skills. More specifically, we focus on the use of avatars in collaborative situations and in the delivery of training and education, especially when physical co-presence is difficult or even undesirable. In these contexts, difficulty most often relates to spatial separation of the human participants, and undesirability relates to the need to have one's surrogate present an appearance and exist in a context that differs from one's own. In other situations, such as carrying out dangerous or humanly impossible physical tasks, a remote avatar may be required for safety or even successful completion. In Smart Cities, human surrogates and avatars can help make people more effective, safer, better educated, and more adept at learning the new skills required for employment and other life events.
"Human Surrogates: Remote Presence for Collaboration and Education in Smart Cities," C. Hughes. EMASC '14, Nov. 7, 2014. https://doi.org/10.1145/2661704.2661712
Chen-Chih Liao, Ting-Fang Hou, Ting-Yi Lin, Y. Cheng, A. Erbad, Cheng-Hsin Hsu, N. Venkatasubramanian
We consider the problem of efficiently using smartphone users to augment stationary infrastructure sensors for better situation awareness in smart cities. We envision a dynamic sensing platform that intelligently assigns sensing tasks to volunteer smartphone users in order to answer queries by performing sensing tasks at specific locations that may not be covered by in-situ infrastructure sensors. We mathematically formulate the problem as an integer program that minimizes overall energy consumption while satisfying the required query accuracy. We present an optimal algorithm that solves this problem using an existing, computationally expensive optimization solver. To reduce the running time, we also propose a more practical heuristic algorithm. Our trace-driven simulation results reveal the benefits of the proposed heuristic algorithm: it (i) finishes all the tasks, (ii) achieves a 6-times-shorter response time, and (iii) performs better with more volunteers. In contrast, exclusively using in-situ sensors completes only 6% of the tasks, while using in-situ sensors with opportunistic sensing (without user intervention) completes 20% of the tasks. Our prototype system is validated in a user study and receives fairly positive feedback from the smartphone users who use it to submit and answer various spatially and temporally dependent queries.
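The task-assignment idea above can be sketched as a simple greedy heuristic. All names and the cost model here are illustrative assumptions; the paper's actual method formulates an integer program solved optimally by a solver, alongside its own heuristic.

```python
# Greedy sketch of assigning sensing tasks to volunteer smartphones while
# keeping added energy cost low. Volunteer/task names and costs are made up.

def assign_tasks(tasks, volunteers, energy_cost):
    """tasks: list of task ids; volunteers: list of volunteer ids;
    energy_cost[(v, t)]: energy for volunteer v to perform task t
    (absent key means v cannot cover t's location)."""
    assignment = {}
    load = {v: 0.0 for v in volunteers}  # energy already committed per volunteer
    for t in tasks:
        # Keep only volunteers who can actually reach this task's location.
        feasible = [v for v in volunteers if (v, t) in energy_cost]
        if not feasible:
            continue  # task left uncovered: no nearby volunteer
        # Pick the volunteer whose total energy grows the least.
        best = min(feasible, key=lambda v: load[v] + energy_cost[(v, t)])
        assignment[t] = best
        load[best] += energy_cost[(best, t)]
    return assignment

costs = {("alice", "noise@park"): 2.0, ("bob", "noise@park"): 1.0,
         ("alice", "air@plaza"): 1.5}
print(assign_tasks(["noise@park", "air@plaza"], ["alice", "bob"], costs))
```

Unlike the optimal integer-programming solution, this greedy pass runs in O(tasks x volunteers) time, which mirrors the paper's motivation for trading optimality for speed.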
"SAIS: Smartphone Augmented Infrastructure Sensing for Public Safety and Sustainability in Smart Cities," Chen-Chih Liao, Ting-Fang Hou, Ting-Yi Lin, Y. Cheng, A. Erbad, Cheng-Hsin Hsu, and N. Venkatasubramanian. EMASC '14, Nov. 7, 2014. https://doi.org/10.1145/2661704.2661706
The exponentially growing trend of multi-device ownership creates both a need and an opportunity to migrate content and ongoing user tasks from one device to another device owned by the same user that is better suited to the current context of use or the task at hand. One intuitive interaction method for enabling this transfer is a virtual display that combines the physical screens of two or more participating devices. In this work, we propose a novel technique to create a virtual screen for multi-device interaction, multimedia sharing, and collaborative-work use cases. To this end, participating devices need to know their relative proximity and orientation. We exploit the rich video information from a device's back-facing camera for calibration, so as to enable a natural form of virtual-desktop creation without requiring any extra equipment or special environments. An extensive experimental evaluation demonstrated the feasibility of the presented approach for seamless multi-device communication.
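The virtual-display idea can be sketched as a shared coordinate space in which each participating device occupies a region. The layout values below are illustrative assumptions; the paper derives device positions from camera-based calibration rather than manual configuration.

```python
# Minimal sketch of a virtual display spanning two device screens.
# Each device occupies a region of a shared virtual-desktop pixel space;
# a virtual point maps to (device, local coordinates) on one screen.

devices = {
    "phone":  {"origin": (0, 0),    "size": (1080, 1920)},
    "tablet": {"origin": (1080, 0), "size": (1536, 2048)},  # placed to the right
}

def locate(vx, vy):
    """Return (device, local_x, local_y) for virtual point (vx, vy),
    or None if the point falls in a gap between screens."""
    for name, d in devices.items():
        ox, oy = d["origin"]
        w, h = d["size"]
        if ox <= vx < ox + w and oy <= vy < oy + h:
            return name, vx - ox, vy - oy
    return None

print(locate(1200, 100))  # lands on the tablet at local (120, 100)
```

Dragging content across the shared space then reduces to re-evaluating `locate` as the virtual coordinates change, with the handoff happening when the owning device changes.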
"Multi-device Interaction for Content Sharing," V. Conotter, G. Grassel, and F. D. Natale. EMASC '14, Nov. 7, 2014. https://doi.org/10.1145/2661704.2661705
Recently, cloud computing and the Internet of Things (IoT) have made their entrance into pervasive healthcare in the smart city environment. However, the integration of IoT and cloud computing in the healthcare domain imposes several technical challenges that have not yet received enough attention from the research community. Some of these challenges are reliable transmission of vital-sign data to the cloud, dynamic resource allocation to facilitate seamless access and processing of IoT data, and effective data-mining techniques. In this paper, we propose a framework to address the above challenges. In addition, we discuss possible solutions to some of these challenges in the smart city environment.
"A Cloud-Assisted Internet of Things Framework for Pervasive Healthcare in Smart City Environment," M. Hassan, H. Albakr, and Hmood Al-Dossari. EMASC '14, Nov. 7, 2014. https://doi.org/10.1145/2661704.2661707
The rapid growth of urbanization has drawn attention to sustainability. Creating smart cities can contribute to the green growth of countries because of their socio-economic, socio-environmental, and eco-efficiency benefits. Intelligent Transportation Systems are a key solution for traffic management on the way to creating a smart city. Not only are huge amounts of time and money wasted at traffic lights, but many people also lose their lives in ambulances due to late hospital arrivals. In this paper, we introduce a geo-fencing approach to help emergency vehicles pass traffic lights in the shortest time possible. Our location of interest is an intersection with traffic signals, and our users are emergency vehicles. To create a geo-fence, modern mobile applications can either specify the latitude and longitude of the four-way edges or treat the geo-fence as an area with its own boundaries. The user's proximity to nearby features can be detected from its current location and reported to the administrator. When an emergency vehicle possessing a specific ID approaches the predefined traffic-light enclosure, a report is sent from its iPhone or Android device to the cloud. Once the vehicle is identified and verified, a message is issued to the Android-based processors inside the smart traffic lights. The lights remain green until the vehicle passes the intersection. The relevant hospitals or police stations can also be informed of the emergency vehicle's location and arrival time. In our simulation, we used the Gimbal geo-fence cloud and the Google cloud for verification and security.
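The fence-entry check at the heart of this approach can be sketched as a haversine distance test against a circular fence around the intersection. The coordinates, radius, and function names below are illustrative; the actual system relies on the Gimbal geo-fence cloud service rather than this hand-rolled check.

```python
import math

# Sketch of a geo-fence trigger: an emergency vehicle's GPS fix is tested
# against a circular fence centered on an intersection.

EARTH_RADIUS_M = 6371000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def inside_fence(car_lat, car_lon, fence_lat, fence_lon, radius_m):
    """True when the vehicle is within radius_m of the fence center."""
    return haversine_m(car_lat, car_lon, fence_lat, fence_lon) <= radius_m

# A 200 m fence around an intersection; the car is roughly 110 m away.
fence = (25.7617, -80.1918)
print(inside_fence(25.7627, -80.1918, *fence, 200))  # True: signal pre-emption fires
```

On entry, the system would report the vehicle's ID to the cloud for verification before the green-light message is issued to the signal controller.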
"Reducing Traffic Congestion Using Geo-fence Technology: Application for Emergency Car," S. Noei, Hugo Santana, A. Sargolzaei, and M. Noei. EMASC '14, Nov. 7, 2014. https://doi.org/10.1145/2661704.2661709
B. Guthier, Rajwa Alharthi, R. Abaalkhail, Abdulmotaleb El Saddik
Smart cities use various deployed sensors and aggregate their data to create a big picture of the live state of the city. This live state can be enhanced by incorporating the affective states of the citizens. In this work, we automatically detect the emotions of the city's inhabitants from geo-tagged posts on the social network Twitter. Emotions are represented as four-dimensional vectors of pleasantness, arousal, dominance and unpredictability. In a training phase, emotion-word hashtags in the messages are used as the ground truth emotion contained in a message. A neural network is trained by using the presence of words, hashtags and emoticons in the messages as features. During the live phase, these features are extracted from new geo-tagged Twitter messages and given as input to the neural network. This allows the estimation of a four-dimensional emotion vector for a new message. The detected emotions are aggregated over space and time and visualized on a map of the city.
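The pipeline described above, binary word/hashtag/emoticon presence features mapped to a four-dimensional emotion vector, can be sketched as follows. The vocabulary and weight matrix are made-up stand-ins for the trained neural network, and a single linear layer substitutes for the network itself.

```python
import re

# Sketch of the affect pipeline: a tweet becomes a binary feature vector,
# which a (stand-in) trained model maps to a 4-d emotion vector of
# pleasantness, arousal, dominance, and unpredictability.

VOCAB = ["great", "traffic", "#happy", "#angry", ":)", ":("]

def extract_features(message):
    """Binary presence of each vocabulary word, hashtag, or emoticon."""
    tokens = set(re.findall(r"#\w+|:\)|:\(|\w+", message.lower()))
    return [1.0 if term in tokens else 0.0 for term in VOCAB]

# One row per vocabulary term, one column per emotion dimension.
# Values are illustrative, not learned weights.
WEIGHTS = [
    [0.8, 0.3, 0.2, 0.0],    # "great"
    [-0.5, 0.4, -0.2, 0.1],  # "traffic"
    [0.9, 0.5, 0.3, 0.0],    # "#happy"
    [-0.9, 0.7, 0.4, 0.2],   # "#angry"
    [0.7, 0.2, 0.1, 0.0],    # ":)"
    [-0.7, 0.3, 0.0, 0.1],   # ":("
]

def emotion_vector(message):
    """Linear stand-in for the trained network: features x weights."""
    feats = extract_features(message)
    return [sum(f * w[d] for f, w in zip(feats, WEIGHTS)) for d in range(4)]

print(emotion_vector("Great day downtown #happy :)"))
```

In the live system, such per-message vectors would then be aggregated by geo-tag over space and time before being drawn onto the city map.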
"Detection and Visualization of Emotions in an Affect-Aware City," B. Guthier, Rajwa Alharthi, R. Abaalkhail, and Abdulmotaleb El Saddik. EMASC '14, Nov. 7, 2014. https://doi.org/10.1145/2661704.2661708
To truly develop Smart Cities, a combination of multimedia, human-factors, and user-centered systems methodologies and design principles will have to be applied. Large capital projects and the development of Smart Cities could turn to cloud, analytics, mobile, social, and security solutions, which could change the outcomes of economic investments and employment opportunities. In addition, the 'Internet of Things', the interconnection of sensors, devices, and everyday objects, requires a standard platform and a 'battle-tested' framework for the next generation of Smart Cities. Improved productivity, asset health, profitability, quality, employee safety, and environmental impact are the desired outcomes. Capitalizing on technology to deliver positive results while preventing 'black swan' events or accidents is a complex puzzle. Legacy infrastructure adopting new technologies, gaps in the workforce, regulatory guidelines, safety performance criteria, unexpected risks, and political challenges add to the complexity and difficulty. We find ourselves in a dilemma where detailed specifications, changes, and relationships among key elements in the market are needed but remain ambiguous, changing, and untraceable. To be successful, best practices in process, requirements, engineering, and risk modeling drawn from interdisciplinary engineering practice could enable rapid transformation. In response to these increasing challenges, governments, academia, and industry are increasingly leveraging the systems and software engineering best practices developed in fail-safe industries, such as nuclear power, aerospace, defense, and capital-intensive heavy industries, to help balance competing interests and deal with increased complexity.
The presentation will introduce "Systems Thinking", "Continuous Engineering", and "Internet of Things" concepts and technologies and describe how they can be successfully leveraged in the transformation to Smart Cities. It shows the need for and importance of combining points of view from different disciplines. This way of thinking is crucial to many areas, goes beyond the Web, and will in time lead to a new genre of computational social sciences that transcend specific applications. Systems Thinking, or Systems Engineering, differs from downstream engineering disciplines in that the outcomes of downstream engineering are implementations, while the outcomes of systems engineering are specification and governance. Systems engineering is a hybrid engineering discipline focused on the characterization of system properties such as requirements, design, analysis, and process governance. The primary activities of systems engineering include identification of customer needs, promotion of engineering collaboration, continuous validation and verification, strategic knowledge reuse, and systems governance throughout the life cycle. The Sys
"Industrial and Business Systems for Smart Cities," Ben A. Amaba. EMASC '14, Nov. 7, 2014. https://doi.org/10.1145/2661704.2661713