TELBECC — A computational method and computer program for analyzing Telephone Building Energy Consumption and Control
P. B. Grimado
The Bell System Technical Journal | Pub Date: 1983-12-01 | DOI: 10.1002/J.1538-7305.1983.TB03461.X
The Telephone Building Energy Consumption and Control (TELBECC) program has been developed to accurately and efficiently analyze environmental control and energy use in telephone company buildings. The program simulates various operational plans to determine their relative energy and cost savings. By analyzing the operation of the heating, ventilation, and air conditioning system as it regulates a changing environment, TELBECC calculates the heating and cooling load, dry-bulb temperature, and relative humidity in the building. The user specifies the building's dry-bulb temperature limits, which are the control variables for the analysis. The program's simplified computational procedure incorporates a recursive time-series scheme to perform the necessary calculations, and results can be obtained for different periods: the quarter hour, hour, day, or month. Energy consumption and control in several equipment buildings located in three different geographical areas have been analyzed by TELBECC, and analysis and comparison of the resulting data demonstrate the advantages of the program.
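The abstract does not reproduce TELBECC's equations, so the following is only a hedged sketch of what a quarter-hour recursive temperature calculation can look like under a first-order lumped thermal model; every name and parameter value here is an assumption, not the paper's.

```python
# Hypothetical first-order recursion for indoor dry-bulb temperature,
# stepped at the finest reporting period mentioned (the quarter hour).
# Illustrates a "recursive scheme using time series", not TELBECC itself.
def simulate_indoor_temp(t0, t_out, q_hvac, ua=450.0, cap=2.0e6, dt=900.0):
    """Recursive dry-bulb temperature series at quarter-hour steps.

    t0     -- initial indoor temperature, deg C
    t_out  -- outdoor temperature per step, deg C
    q_hvac -- HVAC heat input per step, W (negative = cooling)
    ua     -- overall building conductance, W/K (assumed value)
    cap    -- lumped thermal capacitance, J/K (assumed value)
    dt     -- time step, s (900 s = one quarter hour)
    """
    temps = [t0]
    for t_o, q in zip(t_out, q_hvac):
        t = temps[-1]
        # Heat balance: stored heat changes by HVAC input plus envelope loss.
        temps.append(t + (dt / cap) * (q + ua * (t_o - t)))
    return temps
```

Aggregating such a per-quarter-hour series over hours, days, or months gives the coarser reporting periods the abstract mentions.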
Recursive fixed-order covariance Least-Squares algorithms
M. Honig
Pub Date: 1983-12-01 | DOI: 10.1002/J.1538-7305.1983.TB03462.X
This paper derives fixed-order recursive Least-Squares (LS) algorithms that can be used in system identification and adaptive filtering applications such as spectral estimation and speech analysis and synthesis. These algorithms solve the sliding-window and growing-memory covariance LS estimation problems, and require less computation than both unnormalized and normalized versions of the computationally efficient order-recursive (lattice) covariance algorithms previously presented. The geometric, or Hilbert space, approach, originally introduced by Lee and Morf to solve the prewindowed LS problem, is used to systematically generate least-squares recursions. We show that combining subsets of these recursions results in prewindowed LS lattice and fixed-order (transversal) algorithms, and in sliding-window and growing-memory covariance lattice and transversal algorithms. The paper discusses both least-squares prediction and joint-process estimation.
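For orientation, here is the straightforward sliding-window LS solution built from rank-one updates and downdates of the inverse correlation matrix via the matrix inversion lemma. This is the O(N²)-per-sample baseline that fast fixed-order recursions of the kind the paper derives improve on; the variable names are ours, not the paper's.

```python
import numpy as np

def rank_one(P, x, sign):
    """Sherman-Morrison update of P = inv(R) after R <- R + sign * x x^T."""
    Px = P @ x
    return P - sign * np.outer(Px, Px) / (1.0 + sign * (x @ Px))

def sliding_window_ls(X, d, L, delta=1e6):
    """LS filter weights over the most recent L samples.

    X: (T, N) regressor rows; d: (T,) desired signal; L: window length;
    delta: large initial scale for the inverse correlation matrix.
    """
    T, N = X.shape
    P = delta * np.eye(N)          # inverse correlation matrix
    r = np.zeros(N)                # cross-correlation vector
    for t in range(T):
        P = rank_one(P, X[t], +1.0)     # sample entering the window
        r += d[t] * X[t]
        if t >= L:                      # sample leaving the window
            P = rank_one(P, X[t - L], -1.0)
            r -= d[t - L] * X[t - L]
    return P @ r                   # w = R^{-1} r on the current window
```

Each step costs O(N²); the lattice and fast transversal algorithms the paper discusses reach the same exact windowed solution in O(N) per sample.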
Off-line quality control in integrated circuit fabrication using experimental design
M. Phadke, R. N. Kackar, D. Speeney, M. Grieco
Pub Date: 1983-05-06 | DOI: 10.1117/12.940434
In this paper we describe the off-line quality control method and its application in optimizing the process for forming contact windows in 3.5-μm complementary metal-oxide semiconductor circuits. The off-line quality control method is a systematic method of optimizing production processes and product designs, and it is widely used in Japan to produce high-quality products at low cost. The key steps of off-line quality control are: (i) identify important process factors that can be manipulated, and their potential working levels; (ii) perform fractional factorial experiments on the process using orthogonal array designs; (iii) analyze the resulting data to determine the optimum operating levels of the factors (both the process mean and the process variance are considered in this analysis); and (iv) conduct an additional experiment to verify that the new factor levels indeed improve quality.
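Steps (ii) and (iii) can be sketched concretely: run a two-level orthogonal array experiment and compare mean responses at each factor level. The L4 array below accommodates three two-level factors in four runs; the response numbers are invented toy values, not the paper's contact-window data.

```python
# L4 orthogonal array: every pair of columns contains each level
# combination equally often, so main effects can be separated.
L4_ARRAY = [          # rows = runs; columns = levels of factors A, B, C
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

def main_effects(runs, y):
    """Mean response at level 0 and level 1 of each factor."""
    effects = []
    for j in range(len(runs[0])):
        lo = [yi for run, yi in zip(runs, y) if run[j] == 0]
        hi = [yi for run, yi in zip(runs, y) if run[j] == 1]
        effects.append((sum(lo) / len(lo), sum(hi) / len(hi)))
    return effects

# Toy responses, e.g. mean printed window size per run:
effects = main_effects(L4_ARRAY, [3.1, 3.5, 2.9, 3.3])
```

For each factor, the level with the better mean response is a candidate operating level; the full method also weighs the process variance, per step (iii), before the verification run of step (iv).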
A lithographic mask system for MOS fine-line process development
J. M. Andrews
Pub Date: 1983-04-01 | DOI: 10.1002/J.1538-7305.1983.TB03116.X
A mask set, incorporating a group of seven test chips, has been designed for fine-line process development and process control. Although six lithographic levels are available, the masks are generally intended to be used in subsets of only two or three levels, to minimize the delay in obtaining electrical test results for whichever processes require investigation. The mask levels serve a variety of purposes for special process development experiments. Available structures include metal-oxide-semiconductor capacitors, p-n junctions, guarded and unguarded Schottky barrier diodes, ohmic contacts, van der Pauw patterns, insulated-gate field-effect transistors, gated diodes, resistors for sheet resistance and linewidth variations, and tapped electromigration test strings. It is not anticipated that a process engineer should ever need more than four levels to achieve an appropriate experimental structure for process development. It is not the purpose of these masks to establish fine-line design rules. The masks are intended to be used primarily with standard photolithographic processing, and most device structures have been designed to tolerate misalignment errors of up to 5 μm. However, certain selected features have been coded in a diminishing sequence down to a minimum of 1.0 μm for special fine-line investigations. A salient feature of this mask system is the option to interleave rapid-turnaround photolithographic steps with fine-line X-ray patterning; therefore, some mask levels have been reissued for X-ray lithography.
An introduction to the application of the theory of probabilistic functions of a Markov process to automatic speech recognition
S. Levinson, L. Rabiner, M. Sondhi
Pub Date: 1983-04-01 | DOI: 10.1002/J.1538-7305.1983.TB03114.X
In this paper we present several of the salient theoretical and practical issues associated with modeling a speech signal as a probabilistic function of a (hidden) Markov chain. First we give a concise review of the literature with emphasis on the Baum-Welch algorithm. This is followed by a detailed discussion of three issues not treated in the literature: alternatives to the Baum-Welch algorithm; critical facets of the implementation of the algorithms, with emphasis on their numerical properties; and behavior of Markov models on certain artificial but realistic problems. Special attention is given to a particular class of Markov models, which we call "left-to-right" models. This class of models is especially appropriate for isolated word recognition. The results of the application of these methods to an isolated word, speaker-independent speech recognition experiment are given in a companion paper.
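The likelihood computation underlying Baum-Welch re-estimation can be shown in a few lines: the forward recursion with per-frame scaling, one standard way to keep the forward variables in floating-point range (one of the numerical issues such implementations must address). The model values in the usage test are toys, not from the paper.

```python
import math

def forward_log_likelihood(pi, A, B, obs):
    """Scaled forward recursion for a discrete-observation HMM.

    pi[i]: initial state probs; A[i][j]: transition probs;
    B[i][k]: probability of symbol k in state i; obs: symbol indices.
    Returns log P(obs | model).
    """
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    log_lik = 0.0
    for t in range(1, len(obs) + 1):
        c = sum(alpha)                   # scale factor for frame t-1
        log_lik += math.log(c)           # log-likelihood accumulates log c
        alpha = [a / c for a in alpha]   # normalized forward variables
        if t < len(obs):                 # propagate one frame ahead
            alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][obs[t]]
                     for j in range(n)]
    return log_lik
```

A "left-to-right" model in the paper's sense is simply one whose transition matrix A is upper-triangular, so the state index never decreases over the utterance.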
A circuit that changes the word rate of pulse code modulated signals
J. C. Candy, O. Benjamin
Pub Date: 1983-04-01 | DOI: 10.1002/J.1538-7305.1983.TB03117.X
In this paper we describe a circuit that accepts pulse code modulated signals sampled at about 8 kHz and resamples them at any desired rate up to 512 kHz. When the sampling satisfies Nyquist's criterion, the distortion introduced is at least 35 dB below the signal level. The circuit uses a digital low-pass filter to interpolate sample values, and it may be integrated as about 2500 gates on a 5-mm² chip.
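The interpolate-by-filtering structure can be sketched as: zero-stuff the input by an integer factor R, then low-pass filter. The triangular FIR below performs plain linear interpolation; the paper's filter is a more carefully designed low-pass, so this shows only the structure, not its 35-dB performance.

```python
def convolve(x, h):
    """Direct-form FIR convolution."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def upsample(x, R):
    """Raise the word rate of x by an integer factor R."""
    stuffed = []
    for v in x:                      # insert R-1 zero words per input word
        stuffed.append(v)
        stuffed.extend([0.0] * (R - 1))
    # Triangular (linear-interpolation) low-pass FIR, peak 1 at center.
    h = [1.0 - abs(n - (R - 1)) / R for n in range(2 * R - 1)]
    y = convolve(stuffed, h)
    return y[R - 1 : R - 1 + R * len(x)]   # strip the filter delay
```

The output preserves the original words at every R-th position and fills the gaps with interpolated values between them.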
Adaptive linearization of power amplifiers in digital radio systems
A. Saleh, J. Salz
Pub Date: 1983-04-01 | DOI: 10.1002/J.1538-7305.1983.TB03113.X
High-frequency power amplifiers operate most efficiently at saturation, i.e., in the nonlinear range of their input/output characteristics. This phenomenon has traditionally dictated the use of constant envelope modulation methods for data transmission, resulting in circular signal constellations. This approach has inherently limited the admissible data rates in digital radio. In this paper we present a method for solving this problem without sacrificing amplifier power efficiency. We describe and analyze an adaptive linearizer that can automatically compensate for amplifier nonlinearity and thus make it possible to transmit multilevel quadrature amplitude modulated signals without incurring intolerable constellation distortions. The linearizer utilizes a real-time, data-directed, recursive algorithm for predistorting the signal constellation. Our analysis and computer simulations indicate that the algorithm is robust and converges rapidly from a blind start. Furthermore, the signal constellation and the average transmitted power can both be changed through software.
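A toy version of constellation predistortion conveys the idea: for each constellation point, iteratively nudge a stored predistorted value until the amplifier output lands on the ideal point. Both the soft-limiting amplifier model and the simple error-feedback update below are stand-ins we have assumed, not the paper's recursive algorithm.

```python
def amplifier(v, a=2.0, b=0.3):
    """Hypothetical memoryless nonlinearity g(v) = a*v / (1 + b*|v|^2)."""
    return a * v / (1.0 + b * abs(v) ** 2)

def train_predistorter(points, mu=0.2, iters=500):
    """For each constellation point p, find v with amplifier(v) ~ p by
    repeatedly correcting v against the observed output error."""
    pre = {p: p for p in points}            # blind start: no predistortion
    for _ in range(iters):
        for p in points:
            err = p - amplifier(pre[p])     # error seen at amplifier output
            pre[p] += mu * err              # data-directed correction
    return pre
```

After training, transmitting pre[p] in place of p makes the saturated amplifier's output fall on the ideal constellation point, which is the sense in which predistortion linearizes the link.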
On the application of vector quantization and hidden Markov models to speaker-independent, isolated word recognition
L. Rabiner, S. Levinson, M. Sondhi
Pub Date: 1983-04-01 | DOI: 10.1002/J.1538-7305.1983.TB03115.X
In this paper we present an approach to speaker-independent, isolated word recognition in which the well-known techniques of vector quantization and hidden Markov modeling are combined with a linear predictive coding analysis front end. This is done in the framework of a standard statistical pattern recognition model. Both the vector quantizer and the hidden Markov models need to be trained for the vocabulary being recognized. Such training results in a distinct hidden Markov model for each word of the vocabulary. Classification consists of computing the probability of generating the test word with each word model and choosing the word model that gives the highest probability. There are several factors, in both the vector quantizer and the hidden Markov modeling, that affect the performance of the overall word recognition system, including the size of the vector quantizer, the structure of the hidden Markov model, and the ways of handling insufficient training data. The effects of many of these factors on recognition accuracy are discussed in this paper. The entire recognizer (training and testing) has been evaluated on a 10-word (digits) vocabulary. For training, a set of 100 talkers spoke each of the digits one time. For testing, an independent set of 100 tokens of each of the digits was obtained. The overall recognition accuracy was found to be 96.5 percent for the 100-talker test set. These results are comparable to those obtained in earlier work using a dynamic time-warping recognition algorithm with multiple templates per digit. It is also shown that the computation and storage requirements of the new recognizer are an order of magnitude smaller than those of a conventional pattern recognition system using linear prediction with dynamic time warping.
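The recognition pipeline described above reduces to two steps: vector-quantize the analysis frames against a codebook, then pick the word whose model scores the resulting symbol sequence highest. The sketch below shows that skeleton; the codebook and the per-word scoring functions are toy stand-ins for the LPC codebook and HMM log-likelihoods.

```python
def quantize(frames, codebook):
    """Map each feature vector to the index of its nearest codeword
    (squared Euclidean distance)."""
    def nearest(f):
        return min(range(len(codebook)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(f, codebook[i])))
    return [nearest(f) for f in frames]

def classify(frames, codebook, word_models):
    """word_models: {word: score_fn(symbol_sequence) -> log-likelihood}.
    Returns the word whose model best explains the quantized frames."""
    symbols = quantize(frames, codebook)
    return max(word_models, key=lambda w: word_models[w](symbols))
```

In the paper's system each score_fn would be the forward-algorithm log-likelihood of that word's trained hidden Markov model, and the codebook size trades quantization distortion against model complexity.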
Analysis of thermally induced loss in fiber-optic ribbons
G. Brockway, M. R. Santana
Pub Date: 1983-04-01 | DOI: 10.1002/J.1538-7305.1983.TB03112.X
In this paper, added loss during temperature cycling in a given ribboned fiber is shown to be caused by thermally induced axial compressive strain imparted to the fiber. A microbending-sensitivity parameter δ is introduced which reduces all loss-strain curves corresponding to different fibers to one characteristic master curve. Thermoviscoelasticity theory is used to calculate the time- and temperature-dependent compressive strain imparted to a ribboned fiber during a standard environmental cycle. Combining these analytical results with environmental data yields the functional relationship between fiber compressive strain and added loss for a fiber of any given δ in an Adhesive-Sandwich Ribbon (ASR) with Urethane-Acrylate (UA) coated fibers. Using this analysis, the added loss for a UA ASR can now be predicted for any environmental cycle. The critical material properties that dominate the environmental performance of ASRs are the tape shrinkback at elevated temperatures and the product αEA of the coefficient α of thermal expansion, the time- and temperature-dependent relaxation modulus E, and the area A of the coating.
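The role of the product αEA can be seen in a back-of-the-envelope form: the axial force a cooling coating exerts on the fiber scales with αEA times the temperature change. The numbers below are illustrative placeholders, not values from the paper.

```python
def thermal_force(alpha, E, A, dT):
    """Axial force (N) from fully constrained thermal contraction of the
    coating: F = alpha * E * A * dT. Negative values compress the fiber."""
    return alpha * E * A * dT

# Illustrative (assumed) values: alpha in 1/K, E in Pa, A in m^2, dT in K.
force = thermal_force(8.0e-5, 5.0e8, 1.0e-8, -40.0)
```

Because E is a time- and temperature-dependent relaxation modulus, the actual analysis is viscoelastic; this constant-modulus expression only indicates why a larger αEA product means more compressive strain, and hence more microbending loss, per degree of cooling.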
Traffic Service Position System No. 1B: Long-range planning tools
P. Bastien, B. R. Wycherley
Pub Date: 1983-03-01 | DOI: 10.1002/J.1538-7305.1983.TB04425.X
This article describes the issues involved in operator services planning and the structure of the computer tools developed to address these issues. The approach to long-range planning of operator services networks is very flexible, and the availability of the Traffic Service Position System No. 1B further enhances this flexibility. This planning effort is a complex process that has been automated by computer tools that are both accurate and user-friendly.