Gish, Herbert
Overview
Works:  9 works in 12 publications in 1 language and 15 library holdings 

Roles:  Author 
Classifications:  QC1
Most widely held works by Herbert Gish
OPTIMUM QUANTIZATION OF RANDOM SEQUENCES by Herbert Gish(
Book
)
2 editions published in 1967 in English and held by 5 WorldCat member libraries worldwide
Whenever information is to undergo digital processing, it must be given a quantized representation. The quantizers investigated are those which are optimum with respect to a mean-square error criterion for stationary input sequences. They are permitted to have memory, and it is shown that they belong to the class of quantized feedback systems, which includes delta modulators and predictive quantization systems (differential pulse-code modulation). Emphasis is placed on the application of quantization to digital communication, with the analog-to-digital and digital-to-analog operations of the quantizer performed by a transmitter and receiver, respectively. The analysis of the quantization system with a linear receiver is treated in depth. The results for random sequences are made applicable to the transmission of time-continuous information by considering the random sequence to have been obtained by sampling a time-continuous process, and by using an interpolator to reconstruct the continuous process from the quantized samples. Several examples are presented for the transmission of a Gauss-Markov process. The effect of band-limiting the continuous process before sampling is also investigated. Comparisons are made with PCM and the rate distortion function.
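The quantized-feedback structure the abstract describes (of which delta modulation and DPCM are special cases) can be sketched as follows. This is a minimal illustration, not the report's design: the uniform quantizer step, the first-order predictor coefficient, and the AR(1) test input are all assumptions chosen for the sketch.

```python
import numpy as np

def dpcm_encode_decode(x, step=0.25, a=0.9):
    """Toy predictive quantizer (DPCM): quantize the prediction error,
    then feed the quantized reconstruction back through the predictor,
    so encoder and decoder stay in step."""
    pred = 0.0
    recon = np.empty_like(x)
    for i, sample in enumerate(x):
        err = sample - pred               # prediction error
        q = step * np.round(err / step)   # uniform (memoryless) quantizer
        recon[i] = pred + q               # receiver's reconstruction
        pred = a * recon[i]               # first-order linear predictor
    return recon

# Gauss-Markov (AR(1)) input, echoing the report's example source
rng = np.random.default_rng(0)
x = np.empty(1000)
x[0] = rng.normal()
for i in range(1, 1000):
    x[i] = 0.9 * x[i - 1] + np.sqrt(1 - 0.81) * rng.normal()

recon = dpcm_encode_decode(x)
mse = float(np.mean((x - recon) ** 2))
```

Because the predictor removes most of the sample-to-sample correlation, the quantizer only has to cover the small prediction error, which is the source of DPCM's advantage over memoryless PCM.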
BBN PLUM: MUC-4 Test Results and Analysis(
Book
)
2 editions published in 1992 in English and held by 2 WorldCat member libraries worldwide
Our mid-term to long-term goals in data extraction from text for the next one to three years are to achieve much greater portability to new languages and new domains, greater robustness, and greater scalability. The novel aspect of our approach is the use of learning algorithms and probabilistic models to learn the domain-specific and language-specific knowledge necessary for a new domain and new language. Learning algorithms should contribute to scalability by making it feasible to deal with domains where it would be infeasible to invest sufficient human effort to bring a system up. Probabilistic models can contribute to robustness by allowing for words, constructions, and forms not anticipated ahead of time and by looking for the most likely interpretation in context. We began this research agenda approximately two years ago. During the last twelve months, we have focused much of our effort on porting our data extraction system (PLUM) to a new language (Japanese) and to two new domains. During the next twelve months, we anticipate porting PLUM to two or three additional domains. For any group, participating in MUC is a significant investment. To be consistent with our mid-term and long-term goals, we imposed the following constraints on ourselves in participating in MUC-4: we would focus our effort on semi-automatically acquired knowledge; we would minimize effort on hand-crafted knowledge; and, most generally, we would minimize MUC-specific effort. Though these three self-imposed constraints meant our overall scores on the objective evaluation were not as high as if we had focused on hand-tuning and hand-crafting the knowledge bases, MUC-4 became a vehicle for evaluating our progress toward the long-term goals.
Investigation of problems in the communication of analog data by Herbert Gish(
Book
)
2 editions published in 1969 in English and held by 2 WorldCat member libraries worldwide
The report investigates certain basic problems in the transmission of analog data, concerning the ability to transmit data with prescribed accuracy over communication channels. Part of the research treats rate distortion theory, and part deals with more specialized performance bounds as well as with specific methods of communication and source encoding techniques. The results on rate distortion theory enable one to calculate bounds on information rates in situations where the theory was hitherto inapplicable, including situations where certain parts of the frequency spectrum of the transmission error are more important than others. In addition, the situation where transmission errors are required to be bounded is treated. It is also shown how rate distortion theory can be applied to problems of nonlinear estimation. (Author)
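For the simplest case not covered by the report's extensions — a memoryless Gaussian source under plain mean-square error — the rate distortion function has the standard closed form R(D) = ½ log₂(σ²/D) bits per sample for D ≤ σ². A small sketch of that textbook bound (an assumption-free baseline, not the report's generalized results):

```python
import math

def gaussian_rate_distortion(var, distortion):
    """Rate-distortion function (bits/sample) of a memoryless Gaussian
    source with variance `var` under mean-square error:
    R(D) = max(0, 0.5 * log2(var / D))."""
    return max(0.0, 0.5 * math.log2(var / distortion))

# Allowing MSE of a quarter of the source variance costs one bit/sample.
r = gaussian_rate_distortion(1.0, 0.25)
```

No encoder can reproduce the source to within distortion D using fewer than R(D) bits per sample, which is why the abstract's performance bounds are phrased in these terms.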
Digital Modulation Enhancement Study(
)
1 edition published in 1973 in English and held by 1 WorldCat member library worldwide
A new method of nonlinear modulation of analog information is described which gives a substantial improvement in performance over conventional modulation techniques such as PCM. The modulation technique, which the author has called Multistream Modulation (MSM), provides the capability of designing modulation performance characteristics tailored to specific needs. The theoretical concepts presented were verified by measurements made on a digital transceiver breadboard constructed during this program. The transceiver breadboard included both the MSM and PCM systems, enabling voice experiments to be conducted with instantaneous switching between PCM and MSM. The informal voice tests which were carried out showed that the predicted and measured threshold extension over PCM was also obtained with voice communication.
Adaptive natural language processing(
Book
)
1 edition published in 1991 in English and held by 1 WorldCat member library worldwide
A handful of special-purpose systems have been successfully deployed to extract pre-specified kinds of data from text. The limitation to widespread deployment of such systems is their assumption of a large volume of hand-crafted, domain-dependent, and language-dependent knowledge in the form of rules. A new approach is to add automatically trainable probabilistic language models to linguistically based analysis. This offers several potential advantages: (1) trainability, by finding patterns in a large corpus rather than hand-crafting such patterns; (2) improvability, by re-estimating probabilities based on a user marking correct and incorrect output on a test set; (3) more accurate selection among interpretations when more than one is produced; and (4) robustness, by finding the most likely partial interpretation when no complete interpretation can be found.
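Advantage (3) — selecting among competing interpretations — reduces to ranking candidates by model likelihood. The sketch below is a generic illustration of that idea, not PLUM's actual scoring: the candidate structure and probabilities are hypothetical.

```python
import math

def most_likely(candidates):
    """Pick the interpretation whose component probabilities give the
    highest total log-likelihood (log-space avoids underflow when many
    small probabilities are multiplied)."""
    def score(c):
        return sum(math.log(p) for p in c["probs"])
    return max(candidates, key=score)

# Two hypothetical analyses of the same sentence, each a product of
# model probabilities for its component decisions.
candidates = [
    {"label": "A", "probs": [0.9, 0.8]},    # joint prob 0.72
    {"label": "B", "probs": [0.95, 0.5]},   # joint prob 0.475
]
best = most_likely(candidates)
```

The same machinery gives advantage (4): if no complete interpretation exists, the highest-scoring partial one can still be returned.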
Analog Coding(
Book
)
1 edition published in 1971 in English and held by 1 WorldCat member library worldwide
The report is concerned with a systematic approach to the development and understanding of nonlinear modulation techniques. Rate distortion theory is used as the basis for determining the potential improvement which is possible by efficient utilization of available bandwidth. It is shown that the performance of a typical communication link could be improved by orders of magnitude through proper modulator/demodulator design. The performance of modulation techniques is evaluated and compared with applicable bounds. The technique of minimizing the upper bound on the distortion is applied to establish the optimum modulator within a class of a certain structure. Further investigation in this area is recommended. Results indicate that to achieve theoretically attainable performance, the modulator must not only be nonlinear but must also make efficient use of memory. (Author)
A geometric approach to multiple-channel signal detection by Douglas Cochran(
)
1 edition published in 1995 in English and held by 1 WorldCat member library worldwide
STATISTICAL DELTA MODULATION(
Book
)
1 edition published in 1966 in English and held by 1 WorldCat member library worldwide
The report describes the results of a study of Statistical Delta Modulation (SDM), a new method of digital transmission of analog information. In this method the system design is tailored to the statistical properties of the input data so as to provide analog reconstruction values with a minimum mean squared error. The method of system design is an iterative procedure in which conditional means are evaluated based upon actual input data. The report presents the theory of operation of the system and describes the results of a computer simulation in which such questions as the effects of sampling rate, channel noise, system memory, and mismatched input processes are discussed. Due to time limitations, only a brief comparison was made with conventional techniques. It was found that at low sampling rates and for a second-order Gauss-Markov process, sampling rate reductions of 38% could be achieved relative to a conventional delta modulation system at the same SNR performance. (Author)
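SDM's point of comparison is the conventional delta modulator, which transmits one bit per sample (the sign of the tracking error) and integrates a fixed step at the receiver. A minimal sketch of that baseline system — not the SDM design, and with an arbitrary step size chosen for illustration:

```python
def delta_modulate(x, step=0.2):
    """Conventional delta modulator: send sign(input - estimate) each
    sample; the receiver adds or subtracts a fixed step accordingly."""
    bits, recon = [], []
    y = 0.0                       # receiver's running estimate
    for s in x:
        b = 1 if s >= y else -1   # one transmitted bit per sample
        bits.append(b)
        y += b * step             # receiver integrates the bit stream
        recon.append(y)
    return bits, recon

# A slow ramp (slope below the step size) is tracked without slope overload.
x = [0.1 * i for i in range(50)]
bits, recon = delta_modulate(x)
```

SDM replaces the fixed ±step with reconstruction values computed as conditional means over actual input data, which is what yields the sampling-rate savings the abstract reports.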
An investigation of quantization techniques(
Book
)
1 edition published in 1968 in English and held by 1 WorldCat member library worldwide
The report consists of two parts, each developing a different approach to the efficient quantization of analog sources. In Part I it is shown, under weak assumptions on the density function of a random variable and under weak assumptions on the error criterion, that uniform quantizing yields an output entropy which is asymptotically smaller than that of any other quantizer, independent of the density function or the error criterion. The asymptotic behavior of the rate distortion function is determined for the class of νth-law loss functions, and the entropy of the uniform quantizer is compared with the rate distortion function for this class of loss functions. The extension of these results to the quantizing of sequences is also given. It is shown that the discrepancy between the entropy of the uniform quantizer and the rate distortion function apparently lies with the inability of the optimal quantizing shapes to cover large-dimensional spaces without overlap. A comparison of the entropies of the uniform quantizer and of the minimum-alphabet quantizer is also given. In Part II, predictive quantization systems and their optimality properties are discussed. A technique for the estimation of the performance of such systems is presented and compared to the results of a digital simulation. (Author)
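The entropy-versus-rate-distortion gap in Part I can be checked numerically for the classic Gaussian/mean-square-error case, where at high rates an entropy-coded uniform quantizer is known to sit about ½ log₂(πe/6) ≈ 0.25 bit/sample above R(D). The sketch below is illustrative only; the unit-variance Gaussian source and step size are assumptions, and the entropy is an empirical estimate.

```python
import math
import random
from collections import Counter

def uniform_quantizer_entropy(samples, step):
    """Empirical entropy (bits/sample) of a uniform quantizer's output."""
    counts = Counter(round(s / step) for s in samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(200_000)]
step = 0.1
h = uniform_quantizer_entropy(samples, step)

# Rate-distortion bound at the quantizer's high-rate MSE, D = step^2/12:
# R(D) = 0.5 * log2(sigma^2 / D) with sigma = 1.
rd = 0.5 * math.log2(12 / step**2)
gap = h - rd   # should be near 0.5*log2(pi*e/6) ~ 0.25 bit/sample
```

The quarter-bit gap is the "covering inefficiency" the abstract attributes to quantizing cells that cannot tile high-dimensional space without overlap.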