S. Eun, Eun Suk No, Hyung Chul Kim, H. Yoon, S. Maeng
Multimedia refers to the composition of multiple monomedia, which must be synchronized temporally and spatially. Over the past few years, several attempts to describe such composition and synchronization have been made in a variety of applications, and these descriptions have served as frameworks in their applications. Although conventional works have succeeded in describing the synchronizations well, we show that they do not deal with the interactivity required in Interactive Multimedia Applications (IMA) such as courseware and hypermedia systems. In this paper, we propose a new specification method based on Milner's Calculus of Communicating Systems (CCS) to cope with this interactivity. To show the effectiveness of the specification, we design and implement a visual programming environment based on the specification mechanism, and present a simple courseware application as a programming example. Our approach implies that the new specification mechanism can be adopted as a framework in various interactive applications, and that the visual programming environment gains the benefit of handling user interactions and synchronization with visual expressions alone, whereas conventional works must be augmented with additional text for control commands.
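The flavor of a CCS-style description can be sketched in a few lines of Python. The encoding below (a process as a map from enabled actions to continuation processes, with the complement of an action written with a leading apostrophe) is an illustrative simplification of our own, not the paper's actual calculus or tool; the `click`/`quiz`/`timeout` action names are invented.

```python
# Toy CCS-flavored encoding. A process is a dict mapping enabled
# actions to successor processes; complementary actions from two
# parallel processes can synchronize into an internal step.
NIL = {}  # the inactive process 0

def prefix(action, cont):
    """a.P -- perform `action`, then behave as `cont`."""
    return {action: cont}

def choice(p, q):
    """P + Q -- offer the enabled actions of either branch."""
    return {**p, **q}

def can_sync(p, q):
    """Actions on which P | Q could handshake (a paired with 'a)."""
    return [a for a in p if "'" + a in q]

# User process: click, then expect the quiz.
user = prefix("click", prefix("quiz", NIL))
# Courseware page: offer a click handler leading to the quiz,
# or time out.
page = choice(prefix("'click", prefix("'quiz", NIL)),
              prefix("'timeout", NIL))

print(can_sync(user, page))  # ['click']
```

Interactivity enters naturally in such a description: the user's `click` is just another action that must handshake with the courseware process before playback proceeds.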
{"title":"Specification of multimedia composition and a visual programming environment","authors":"S. Eun, Eun Suk No, Hyung Chul Kim, H. Yoon, S. Maeng","doi":"10.1145/166266.166285","DOIUrl":"https://doi.org/10.1145/166266.166285","url":null,"abstract":"Multimedia refers to the composition of multiple monomedia which should be synchronized temporally and spatially. Over the past few years, some trials to describe the composition and synchronization have been made in a variety of applications and these descriptions have been used as frameworks in their applications. Although conventional works have succeeded in describing the synchronizations well, we indicate that they do not deal with the interactivity required in Interactive Multimedia Applications(IMA) like coursewares and hypermedia systems. In this paper, we propose a new specification method based on Milner’s Calculus of Communicating Systems(CCS) to cope with the interactivity. For showing the effectiveness of the specification, we design and implement a visual programming environment based on the specification mechanism, and propose a simple courseware as a programming example. 
Our approach has implications that the new specification mechanism can be adapted as a framework in various interactive applications and the visual programming environment acquires the benefits that it can handle user interactions and synchronizations with only visual expressions while additional texts for control commands should be augmented in conventional works.","PeriodicalId":412458,"journal":{"name":"MULTIMEDIA '93","volume":"651 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113986352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper describes an interface which helps people maintain a sense of spatial context while navigating virtual real-world scenes. First, a single panoramic image of the entire space is constructed from the separate partial, but detailed, images which constitute the original video sampling of the scene. The user can then navigate through this real-world data by manipulating either the panoramic overview or the original detailed views appearing in a separate window. Clicking or dragging the cursor over regions in the panoramic overview updates the corresponding detailed view. Using the panorama in this way frees the user from the traditional linear modes of interacting with virtual real-world scenes. In addition, interacting with the detailed view highlights the corresponding region in the panoramic overview and leaves a "trail" of the user's path through the space. The methods of visualizing and interacting with digital video described in this paper can also be applied to collections of digital video which do not correspond to a physical space, such as standard linear movies.
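The panorama-to-detail interaction can be sketched with a hypothetical lookup: given the panorama x-coordinates at which each source frame is centered, a click selects the nearest detailed frame. The function and numbers below are invented for illustration and are not the paper's implementation.

```python
def frame_for_click(x, frame_centers):
    """Map a panorama x-coordinate to the index of the source frame
    whose center is nearest -- a stand-in for the panoramic overview's
    click-to-detailed-view update."""
    return min(range(len(frame_centers)),
               key=lambda i: abs(frame_centers[i] - x))

# Five detailed frames stitched into a 1000-px-wide panorama,
# centered at regular intervals (hypothetical layout).
centers = [100, 300, 500, 700, 900]
print(frame_for_click(420, centers))  # 2 (frame centered at 500)
```

The reverse direction (highlighting the panorama region for the current detailed view, leaving a "trail") would use the same center table in the other direction.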
{"title":"Panoramic overviews for navigating real-world scenes","authors":"Laura Teodosio, M. Mills","doi":"10.1145/166266.168422","DOIUrl":"https://doi.org/10.1145/166266.168422","url":null,"abstract":"This paper describes an interface which helps people maintain a sense of spatial context while navigating virtual realworld scenes. First, a single panoramic image of the entire space is constructed from the separate partial, but detailed, images which constitute the original video sampling of the scene. The user can then navigate through this real-world data by manipulating either the panoramic overview or the original detailed views appearing in a separate window. Clicking or dragging the cursor over regions in the panoramic overview updates the corresponding detailed view. Using the panorama in this way frees the user from the traditional linear modes of interacting with virtual real-word scenes. In addition, interacting with the detailed view highlights the corresponding region in the panoramic overview and leaves a \"trail\" of the user's path through the space. These methods of visualizing and interacting with digital video described in this paper can also be applied to collections of digital video which do not correspond to a physical space such as standard linear movies.","PeriodicalId":412458,"journal":{"name":"MULTIMEDIA '93","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122247202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The design and implementation of a software decoder for MPEG video bitstreams is described. The software has been ported to numerous platforms including PCs, workstations, and mainframe computers. Performance comparisons are given for several different bitstreams and platforms, including a unique metric devised to compare price/performance across different platforms (percentage of required bit rate per dollar). We also show that memory bandwidth, not the computational complexity of the inverse discrete cosine transform as is commonly thought, is the primary limitation on decoder performance.
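The price/performance metric can be illustrated with a small calculation. The exact formula is an assumption on our part: we read "percentage of required bit rate per dollar" as the fraction of the stream's required bit rate that the decoder sustains, expressed as a percentage and divided by the system's price. The numbers below are hypothetical.

```python
def percent_required_bitrate_per_dollar(achieved_bps, required_bps,
                                        price_usd):
    """Price/performance in the spirit of the paper's metric:
    what percentage of the stream's required bit rate the decoder
    sustains, per dollar of hardware. Formula is an assumption."""
    return 100.0 * achieved_bps / required_bps / price_usd

# Hypothetical: a $5000 workstation sustaining 900 kbit/s of a
# 1.5 Mbit/s MPEG-1 stream.
print(percent_required_bitrate_per_dollar(900_000, 1_500_000, 5000))
# 0.012 (percent of required rate per dollar)
```

A metric normalized this way lets a cheap machine that decodes slowly be compared fairly against an expensive machine that decodes in real time.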
{"title":"Performance of a software MPEG video decoder","authors":"Ketan Patel, B. Smith, L. Rowe","doi":"10.1145/166266.166274","DOIUrl":"https://doi.org/10.1145/166266.166274","url":null,"abstract":"The design and implementation of a software decoder for MPEG video bitstreams is described. The software has been ported to numerous platforms including PC''s, workstations, and mainframe computers. Performance comparisons are given for several different bitstreams and platforms including a unique metric devised to compare price/performance across different platforms (percentage of required bit rate per dollar). We also show that memory bandwidth is the primary limitation in performance of the decoder, not the computational complexity of the inverse discrete cosine transform as is commonly thought.","PeriodicalId":412458,"journal":{"name":"MULTIMEDIA '93","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126864109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract: We address the issue of the design of architectures and abstractions to implement multimedia scientific manipulation systems, propose a model for the integration of software tools into a multi-user distributed and collaborative environment on the multimedia desktop, and briefly describe a prototype CSCW infrastructure which we have used to implement a scientific manipulation environment. Finally, we present example design systems to exhibit that multimedia interfaces, incorporating text, graphics, audio and video, greatly facilitate distributed and collaborative scientific design effort. SHASTRA is a distributed and collaborative geometric design and scientific manipulation environment. In this system we address the research and development of the next generation of scientific software environments where multiple users (say, a collaborative engineering design team) create, share, manipulate, analyze, simulate, and visualize complex three-dimensional geometric designs over a distributed heterogeneous network of workstations and supercomputers. SHASTRA consists of a growing set of interoperable tools for geometric design and scientific analysis, networked into a highly extensible environment. It provides a unified framework for collaboration, session management, multimedia communication, and data sharing, along with a powerful numeric, symbolic and graphics substrate, enabling the rapid prototyping and development of efficient software tools for the creation, manipulation and visualization of multi-dimensional scientific data. The design of SHASTRA is the embodiment of a simple idea: scientific manipulation toolkits can abstractly be thought of as objects that provide specific functionality. At the system level, SHASTRA specifies architectural guidelines and provides communication facilities that let toolkits cooperate to utilize the functionality they offer. At the application level, it provides collaboration and multimedia facilities that let users cooperate. A marriage of the two lets us design sophisticated problem-solving environments.
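The "toolkits as objects that provide specific functionality" idea can be sketched as a small registry/broker: toolkits advertise the operations they offer, and requests are routed to whichever toolkit provides them. All names here are invented for illustration; SHASTRA's actual architecture and protocols are not reproduced.

```python
# Minimal sketch of functionality-based toolkit brokering.
class Broker:
    def __init__(self):
        self.registry = {}  # operation name -> toolkit name

    def register(self, toolkit, operations):
        """A toolkit advertises the operations it can perform."""
        for op in operations:
            self.registry.setdefault(op, toolkit)

    def route(self, operation):
        """Return a toolkit offering the operation, or None."""
        return self.registry.get(operation)

broker = Broker()
broker.register("geometry-kernel", ["triangulate", "boolean-op"])
broker.register("visualizer", ["render", "animate"])

print(broker.route("render"))  # visualizer
```

In a collaborative setting, the same indirection lets a design team's tools cooperate without each tool knowing which peer ultimately services a request.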
{"title":"Collaborative multimedia scientific design in SHASTRA","authors":"V. Anupam, C. Bajaj","doi":"10.1145/166266.168458","DOIUrl":"https://doi.org/10.1145/166266.168458","url":null,"abstract":"Abstract: We address the issue of the design of architectures and abstractions to implement multimedia scientific manipulation systems, propose a model for the integration of software tools into a multi-user distributed and collaborative environment on the multimedia desktop, and briefly describe a prototype CSCW infrastructure which we have used to implement a scientific manipulation environment. Finally, we present example design systems to exhibit that multimedia interfaces, incorporating text, graphics, audio and video, greatly facilitate distributed and collaborative scientific design effort. SHASTRA is a distributed and collaborative geometric design and scientific manipulation environment. In this system we address the research and development of the next generation of scientific software environments where multiple users (say, a collaborative engineering design team) create, share, manipulate, analyze, simulate, and visualize complex three-dimensional geometric designs over a distributed heterogeneous network of workstations and supercomputers. SHASTRA consists of a growing set of interoperable tools for geometric design and scientific analysis, networked into a highly extensible environment. It provides a unified framework for collaboration, session management, multimedia communication, and data sharing, along with a powerful numeric, symbolic and graphics substrate, enabling the rapid prototyping and development of efficient software tools for the creation, manipulation and visualization of multi-dimensional scientific data. The design of SHASTRA is the embodiment of a simple idea: scientific manipulation toolkits can abstractly be thought of as objects that provide specific functionality. At the system level, SHASTRA specifies architectural guidelines and provides communication facilities that let toolkits cooperate to utilize the functionality they offer. At the application level, it provides collaboration and multimedia facilities that let users cooperate. A marriage of the two lets us design sophisticated problem-solving environments.","PeriodicalId":412458,"journal":{"name":"MULTIMEDIA '93","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131848380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yechezkal-Shimon Gutfreund, José P. Diaz-Gonzalez, R. Sasnett, V. Phuah
Distributed multimedia applications consist of a mixture of elements scattered at different locations on a network. Each element on the network has different transmission needs, and each link of the network has different transmission characteristics. Just as an orchestra conductor must match the characteristics of instruments to the performance skills of the musicians, and then orchestrate the piece according to their location in the orchestra and the acoustic properties of the hall, so too a multimedia orchestration service must take A/V elements, match them to the A/V servers with appropriate responsiveness, and distribute the elements to appropriate locations on the network. To do this, we have created an orchestration service that integrates and centralizes the orchestration task, thereby relieving the individual elements from having to be aware of how they are composited into a combined application, and, we hope, also leading to globally optimal and balanced networks. Distributed multimedia applications consist of a mixture of elements distributed over a network. For example, in figure 1, we show a collaborative media-space [3] where two scientists are conducting a joint transcontinental experiment. One scientist has a high-resolution SEM microscope; the other provides the NMR scanner. Both are producing real-time video, which they are also processing in real time. The results of the image processing are used to drive a real-time simulation which provides a parallel representation of the results. They share the video, but have separate simulators and renderers so that they can view different aspects of the simulation. In order to build this system, processing tasks (e.g. image processing) have to be matched to appropriate compute servers, input/output data flows have to be characterized, and appropriate network connections established. However, this binding cannot be static. Loading changes on the compute servers and changes in traffic flow patterns on the underlying ATM network must be constantly monitored. In response to load changes, traffic may have to be moved to alternative virtual connections, or elements rescheduled onto alternative compute servers, to maintain the QoS guarantees. From this specific example, we can create a general statement of the multimedia orchestration problem. Distributed multimedia applications consist of a set of elements. Elements can act as either
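The core matching step can be sketched as a greedy placement: each element, with a capability requirement and a rough load estimate, goes to the currently least-loaded server that supports it. The data model and names below are invented for illustration and are not CircusTalk's API.

```python
def assign(elements, servers):
    """Greedy sketch of the orchestration problem: place each element
    (name, kind, load-cost) on the least-loaded server whose
    capabilities include the element's kind."""
    load = {name: 0.0 for name in servers}
    placement = {}
    for name, kind, cost in elements:
        candidates = [s for s, kinds in servers.items() if kind in kinds]
        best = min(candidates, key=lambda s: load[s])
        load[best] += cost
        placement[name] = best
    return placement

# Hypothetical servers and elements from the two-scientist example.
servers = {"av1": {"video", "audio"}, "sim1": {"simulation"}}
elements = [("sem-feed", "video", 0.6),
            ("nmr-feed", "video", 0.6),
            ("renderer", "simulation", 0.4)]

print(assign(elements, servers))
```

A real orchestration service would re-run a placement like this as server load and network traffic change, migrating elements to preserve QoS guarantees.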
{"title":"CircusTalk: an orchestration service for distributed multimedia","authors":"Yechezkal-Shimon Gutfreund, José P. Diaz-Gonzalez, R. Sasnett, V. Phuah","doi":"10.1145/166266.168419","DOIUrl":"https://doi.org/10.1145/166266.168419","url":null,"abstract":"Distributed multimedia applications consist of a mixture of elements scattered at different locations on a network. Each element on the network has different transmission needs and each link of the network has different transmission characteristics. Just as an orchestra conductor must match the characteristics of instruments to the performance skills of the musicians-then orchestrate the piece according to their location in the orchestra and the acoustic properties of the hall. So too, a multimedia orchestration service must take A/V elements, match them to the A/V servers with appropriate responsiveness, and distribute the elements to appropriate locations on the network. To do this, we have created an orchestration service that integrates and centralizes the orchestration task thereby relieving the individual elements from being aware of how they are being composited to form a combined application and hopefully also leading to globally optimal and balanced networks. Distributed multimedia applications consist of a mixture of elements distributed over a network. For example, in figure 1, we show a collaborative media-space [3] where two scientists are conducting a joint transcontinental experiment. One scientist has a high-resolution SEM microscope, the other provides the NMR scanner. Both are producing real-time video which they are also processing in real-time. The results of the image processing is being used to drive a real-time simulation which is providing a parallel representation of the results. They will be sharing the video, but having separate simulators and renderers so that they can view different aspects of the simulation. In order to build this system, processing tasks (e.g. 
image processing) will have to be matched to appropriate compute servers, input/output data flows will have to be characterized, and appropriate network connections established. However, this binding cannot be static. Loading changes on the compute servers and changes in traffic flow patterns on the underlying ATM network must be constantly monitored. In response to load changes, alternative virtual connections or alternative compute servers may have to be rescheduled to maintain the QoS guarantees. From this specific example, we can create a general statement of the multimedia orchestration problem. Distributed multimedia applications consist of a set of elements. Elements can act as either","PeriodicalId":412458,"journal":{"name":"MULTIMEDIA '93","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131224894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The problems in image retrieval derive from the difficulty of exactly defining and interpreting the image content. In most cases, image retrieval techniques are based on database system techniques [6], [19] or on information retrieval systems in which the image content is represented in text form [20]. Other image retrieval techniques require that the images belong to a specific domain which must be described in advance [25]. In this paper, we present experiments in the retrieval of multimedia mineral information using AMCIRS (A Multimedia Cognitive-based Information Retrieval System) [13]. The AMCIRS query mechanism is based on a content search over multimedia objects using the vector model. Each vector is composed of text and image objects. The image objects in the vectors are image object contours, represented by polygonal approximations. The content search process reduces to estimating the similarity between the MM query and MM index vectors. The similarity function for image objects is based on polygon similarity estimation. The basic elements of AMCIRS, which integrates image and text information, have been described elsewhere [12], [13]. In AMCIRS, the content search is performed using the vector model [26], where the user query and the MM information are represented by the MM query and index vectors, respectively. Each vector contains text and image objects. The image objects in the vectors are image object contours, represented by polygonal approximations [24]. Information selection in AMCIRS is based on the similarity estimation between the MM query and MM index vectors. The similarity function for the image objects reduces to polygon similarity estimation [14]. The experimental evaluation of AMCIRS retrieval effectiveness is expressed by the recall and precision parameters. Possible advantages of multiple-media retrieval over single-medium retrieval are also investigated and are represented explicitly by recall-precision diagrams.
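The vector-model matching and the recall/precision evaluation can be illustrated with a standard cosine-similarity sketch. AMCIRS's actual similarity function over polygonal image-object contours is more involved and is not reproduced here; the term weights below are toy values.

```python
import math

def cosine(q, d):
    """Standard vector-model similarity between a query vector and
    an index vector."""
    dot = sum(a * b for a, b in zip(q, d))
    nq = math.sqrt(sum(a * a for a in q))
    nd = math.sqrt(sum(b * b for b in d))
    return dot / (nq * nd) if nq and nd else 0.0

def recall_precision(retrieved, relevant):
    """Recall and precision of a retrieved set against the set of
    relevant items."""
    hits = len(set(retrieved) & set(relevant))
    return hits / len(relevant), hits / len(retrieved)

query = [1.0, 0.5, 0.0]
print(round(cosine(query, [1.0, 0.5, 0.0]), 3))  # 1.0 (identical)
print(round(cosine(query, [0.0, 0.0, 1.0]), 3))  # 0.0 (orthogonal)
print(recall_precision(["a", "b", "c"], ["a", "d"]))
```

Plotting recall against precision as the similarity threshold varies yields the recall-precision diagrams used to compare multiple-media retrieval with single-medium retrieval.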
{"title":"Experiments in retrieval of mineral information","authors":"D. Cakmakov, D. Davcev","doi":"10.1145/166266.166272","DOIUrl":"https://doi.org/10.1145/166266.166272","url":null,"abstract":"The problems in image retrieval derive from the difficulty to exactly define and interpret the image content. In most cases, image retrieval techniques are based on the data base system techniques [6], [19] or the information retrieval systems, where the image content is represented in a text form [20]. Other image retrieval techniques require that the images belong to a specific domain which must be described in advance [25]. In this paper, we present the experiments in retrieval of multimedia mineral information using AMCIRS (A Multimedia Cognitive-based Information Retrieval System) [13]. The AMCIRS query based mechanism is based on a multimedia objects content search using the vector model. Each vector is composed of text and image objects. The image objects in the vectors are image object contours, represented by polygonal approximations. The content search process is deduced to the similarity estimation between the MM query and MM index vectors. The similarity function for image objects is based on the polygon similarity estimation. The basic elements of A Multimedia Cognitive-based Information Retrieval System called AMCIRS which integrates image and text information have been described elsewhere [12], [13]. In AMCIRS, the content search process is performed using the vector model [26], where the user query and the MM information are presented by the MM query and index vectors respectively. Each vector contains text and image objects. The image objects in the vectors are image object contours, represented by polygonal approximations [24]. The information selection in AMCIRS is based on the similarity estimation between the MM query and MM index vectors. The similarity function for the image objects is deduced to the polygon similarity estimation [14]. 
The experimental evaluation of AMCIRS retrieval effectiveness is expressed by the recall and precision parameters. Possible advantages of multiple media retrieval with respect to the single medium retrieval are also investigated and explicitly represented by the recall-precision diagrams.","PeriodicalId":412458,"journal":{"name":"MULTIMEDIA '93","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123870751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Digital libraries of the future","authors":"E. Fox","doi":"10.1145/166266.168461","DOIUrl":"https://doi.org/10.1145/166266.168461","url":null,"abstract":"","PeriodicalId":412458,"journal":{"name":"MULTIMEDIA '93","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126852151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
T. Little, G. Ahanger, R. Folz, J. Gibbon, F. Reeve, D. H. Schelleng, D. Venkatesh
Video-on-demand represents a key demonstrative application for enabling multimedia technology in communication, database, and interface research. This application requires solving a number of diverse technical problems, including the data synchronization problem for time-dependent data delivery. In this paper we describe the general requirements of video-on-demand and introduce a system supporting content-based retrieval and playback based on the structure and content of digital motion pictures. In our model we capture domain-specific information for motion pictures and provide access to individual scenes of movies through queries on a temporal database. We describe our implementation of this service using existing workstation and storage technology.
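A content-based scene query against a temporal database can be sketched in miniature: scenes carry a time interval and content annotations, and a query returns the intervals to play back. The schema and field names below are invented for illustration; the paper's actual data model is richer.

```python
# Hypothetical miniature of a temporal scene database.
scenes = [
    {"movie": "M1", "start": 0,   "end": 120, "tags": {"credits"}},
    {"movie": "M1", "start": 120, "end": 400, "tags": {"chase", "city"}},
    {"movie": "M1", "start": 400, "end": 650, "tags": {"dialogue"}},
]

def find_scenes(tag, db):
    """Return (start, end) playback intervals of scenes whose
    content annotations include the given tag."""
    return [(s["start"], s["end"]) for s in db if tag in s["tags"]]

print(find_scenes("chase", scenes))  # [(120, 400)]
```

Resolving a query to time intervals rather than whole files is what lets the delivery layer schedule time-dependent playback of individual scenes.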
{"title":"A digital on-demand video service supporting content-based queries","authors":"T. Little, G. Ahanger, R. Folz, J. Gibbon, F. Reeve, D. H. Schelleng, D. Venkatesh","doi":"10.1145/166266.168450","DOIUrl":"https://doi.org/10.1145/166266.168450","url":null,"abstract":"Video-on-demand represents a key demonstrative application for enabling multimedia technology in communication, database, and interface research. This application requires solving a number of diverse technical problems including the data synchronization problem for time-dependent data delivery. In this paper we describe the general requirements of video-on-demand and introduce a system supporting content-based retrieval and playback for the structure and content of digital motion pictures. In our model we capture domain-specific information for motion pictures and provide access to individual scenes of movies through queries on a temporal database. We describe our implementation of this service using existing workstation and storage technology.","PeriodicalId":412458,"journal":{"name":"MULTIMEDIA '93","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129226910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Altenhofen, J. Dittrich, R. Hammerschmidt, Thomas Käppner, Carsten Kruschel, Ansgar Kückes, Thomas Steinig
DeTeBerkom, Voltastr. 5, D-1000 Berlin 65, Germany. Abstract: The BERKOM Multimedia Collaboration Service is a multi-vendor workstation conferencing solution. It allows users to share applications and to participate in audiovisual conferences from their desktop. A conference manager administers conferences and controls the assignment of roles to conference participants. A conference directory collects information about users, their roles, and the conferences they participate in. An application sharing component multiplexes the output of a window-system-based application (e.g., an X or Windows program) to all conference participants and, in turn, allows users to provide input. An audiovisual subsystem establishes communication links among the conference participants in an all-digital form.
{"title":"The BERKOM multimedia collaboration service","authors":"M. Altenhofen, J. Dittrich, R. Hammerschmidt, Thomas Käppner, Carsten Kruschel, Ansgar Kückes, Thomas Steinig","doi":"10.1145/166266.168460","DOIUrl":"https://doi.org/10.1145/166266.168460","url":null,"abstract":"DeTeBerkom, Voltastr. 5, D-1000 Berlin 65, Germany. Abstract: The BERKOM Multimedia Collaboration Service is a multi-vendor workstation conferencing solution. It allows users to share applications and to participate in audiovisual conferences from their desktop. A conference manager administrates conferences and controls the assignment of roles to conference participants. A conference directory collects information about users, their roles, and the conferences they participate in. An application sharing component multiplexes the output of a window system based application (e.g., an X or Windows program) to all conference participants and, in turn, allows users to provide input. An audiovisual subsystem establishes communication links among the conference participants in an all-digital form.","PeriodicalId":412458,"journal":{"name":"MULTIMEDIA '93","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127489002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Where were we: making and using near-synchronous, pre-narrative video","authors":"S. Minneman, S. Harrison","doi":"10.1145/166266.166290","DOIUrl":"https://doi.org/10.1145/166266.166290","url":null,"abstract":"In order to make higher production speeds possible when making confectionery lollipops a method and a device are presented wherein the separation of the pieces of confectionery takes place by means of a cutting stamp making an up and down movement in the vertical plane and, each time, a piece of confectionery is transferred to a production head in which a partially inserted stick is present already and in which the piece of confectionery is prepressed, whereupon the stick is inserted further into the piece of confectionery during a rotation of the production head to a second position in which, by means of a pressing stamp moving to and fro in radial direction, the subsequent pressing of the lollipops takes place, the production head being rotated thereupon again over a distance to a position in which the lollipop is ejected from the production head.","PeriodicalId":412458,"journal":{"name":"MULTIMEDIA '93","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1993-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121822213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}