WorldCat Identities

Weischedel, R. 1949-

Overview
Works: 30 works in 64 publications in 2 languages and 419 library holdings
Genres: Excerpts 
Roles: Author, Other
Classifications: QA278.2, 519.4
Most widely held works by R Weischedel
Optimal subset selection: multiple regression, interdependence, and optimal network algorithms by David E Boyce( Book )

14 editions published in 1974 in English and German and held by 356 WorldCat member libraries worldwide

In the course of one's research, the expediency of meeting contractual and other externally imposed deadlines too often seems to take priority over what may be more significant research findings in the longer run. Such is the case with this volume which, despite our best intentions, has been put aside time and again since 1971 in favor of what seemed to be more urgent matters. Despite this delay, to our knowledge the principal research results and documentation presented here have not been superseded by other publications. The background of this endeavor may be of some historical interest, especially to those who agree that research is not a straightforward, mechanistic process whose outcome or even direction is known in advance. In the process of this brief recounting, we would like to express our gratitude to those individuals and organizations who facilitated and supported our efforts. We were introduced to the Beale, Kendall and Mann algorithm, the source of all our efforts, quite by chance. Professor Britton Harris suggested to me in April 1967 that I might like to attend a CEIR half-day seminar on optimal regression being given by Professor M.G. Kendall in Washington, D.C. I agreed that the topic seemed interesting and went along. Had it not been for Harris' suggestion and financial support, this work almost certainly would never have begun.
A computer program for optimal regression analysis by David E Boyce( Book )

1 edition published in 1969 in English and held by 11 WorldCat member libraries worldwide

OntoNotes release 3.0( )

2 editions published in 2009 in English and held by 8 WorldCat member libraries worldwide

"The OntoNotes project is a collaborative effort between BBN Technologies, the University of Colorado, the University of Pennsylvania, and the University of Southern California's Information Sciences Institute. The goal of the project is to annotate a large corpus comprising various genres of text (news, conversational telephone speech, weblogs, Usenet, broadcast, talk shows) in three languages (English, Chinese, and Arabic) with structural information (syntax and predicate argument structure) and shallow semantics (word sense linked to an ontology and coreference). OntoNotes release 3.0 is a continuation of the OntoNotes project and is supported by the Defense Advanced Research Projects Agency, GALE Program Contract No. HR0011-06-C-0022"--LDC catalogue
Toward a library of formal designs of software : final report by R Weischedel( Book )

4 editions published in 1979 in English and held by 4 WorldCat member libraries worldwide

The most promising approach to problems of large software systems is the formal specification of module interfaces, during the design phase, based on the information-hiding principle. The advantages of formal specifications are as follows: (1) Their precision, lack of ambiguity, and attention to detail should cut down on design errors. (2) They provide informal verification of a hierarchically designed system while it is being designed. (3) Special design validation teams could rigorously verify a design before it is implemented, perhaps with the aid of automated tools for some of the verification. (4) Formal specification enables rigorous specification of the requirements that an embedded computer system must conform to. (5) They combine with the information-hiding principle to enable design of systems that are much easier to modify and maintain. This research has investigated the feasibility of a library of formal specifications so that designers could build on the work of others and thereby significantly cut the upfront effort involved
Algorithms That Learn to Extract Information BBN: Description of the Sift System as Used for MUC-7( Book )

2 editions published in 1998 in English and held by 2 WorldCat member libraries worldwide

For MUC-7, BBN has for the first time fielded a fully-trained system for NE, TE, and TR; results are all the output of statistical language models trained on annotated data, rather than programs executing handwritten rules. Such trained systems have some significant advantages: 1. They can be easily ported to new domains by simply annotating data with semantic answers. 2. The complex interactions that make rule-based systems difficult to develop and maintain can here be learned automatically from the training data. We believe that the results in this evaluation are evidence that such trained systems, even at their current level of development, can perform roughly on a par with rules hand-tailored by experts. Since MUC-3, BBN has been steadily increasing the proportion of the information extraction process that is statistically trained. Already in MET-1, our name-finding results were the output of a fully statistical, HMM-based model, and that statistical IdentiFinder (trademark) model was also used for the NE task in MUC-7. For the MUC-7 TE and TR tasks, BBN developed SIFT, a new model that represents a significant further step along this path, replacing PLUM, a system requiring handwritten patterns, with a single integrated trained model.
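The HMM-based name finding described in this abstract can be illustrated with a toy Viterbi decoder over NAME/OTHER states. The states, transition probabilities, and emission probabilities below are invented for illustration only; IdentiFinder's actual model is far richer (word features, back-off, trained on large annotated corpora):

```python
# Toy sketch of HMM name tagging decoded with Viterbi.
# All probabilities are hypothetical, not IdentiFinder's trained values.
import math

STATES = ["NAME", "OTHER"]
trans = {("NAME", "NAME"): 0.6, ("NAME", "OTHER"): 0.4,
         ("OTHER", "NAME"): 0.2, ("OTHER", "OTHER"): 0.8}
start = {"NAME": 0.2, "OTHER": 0.8}
emit = {"NAME": {"BBN": 0.5, "Weischedel": 0.4},
        "OTHER": {"at": 0.3, "works": 0.3}}

def viterbi(tokens):
    """Return the most likely state sequence for a token list."""
    eps = 1e-6  # smoothing for unseen (state, word) pairs
    v = [{s: math.log(start[s]) + math.log(emit[s].get(tokens[0], eps))
          for s in STATES}]
    back = []
    for tok in tokens[1:]:
        row, ptr = {}, {}
        for s in STATES:
            best = max(STATES, key=lambda p: v[-1][p] + math.log(trans[(p, s)]))
            row[s] = (v[-1][best] + math.log(trans[(best, s)])
                      + math.log(emit[s].get(tok, eps)))
            ptr[s] = best
        v.append(row)
        back.append(ptr)
    # Trace back the highest-probability path.
    last = max(STATES, key=lambda s: v[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

tags = viterbi(["Weischedel", "works", "at", "BBN"])  # → NAME OTHER OTHER NAME
```

The key property the abstract emphasizes carries over even to this sketch: changing the model means re-estimating the probability tables from annotated data, not rewriting rules.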
TREC-9 Cross-Lingual Retrieval at BBN( Book )

2 editions published in 2000 in English and held by 2 WorldCat member libraries worldwide

BBN participated only in the cross-language track at TREC-9. We extended the monolingual approach of Miller et al. (1999), which uses hidden Markov models (HMM), by incorporating translation probabilities from Chinese terms to English terms. In our approach, the IR system ranks documents by the probability that a Chinese document D is relevant given an English query Q, P(D is Rel | Q).
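The ranking idea in this abstract can be sketched as a unigram mixture: each English query term is scored against a Chinese document by summing translation probabilities over the document's terms, smoothed with a background English model. The mixture weight, translation table, and background probabilities below are hypothetical toy values, not the trained TREC-9 model:

```python
# Toy sketch of cross-lingual query-likelihood ranking with a
# translation table. Probabilities are invented for illustration;
# the real model is estimated from parallel and monolingual corpora.

ALPHA = 0.3  # hypothetical weight on the general-English background model

def doc_prob(c_term, doc):
    """P(c | D): relative frequency of a Chinese term in the document."""
    return doc.count(c_term) / len(doc)

def query_likelihood(query, doc, trans, background):
    """P(Q | D) under a unigram mixture with translation probabilities."""
    score = 1.0
    for e in query:
        translated = sum(p * doc_prob(c, doc)
                         for c, p in trans.get(e, {}).items())
        score *= ALPHA * background.get(e, 1e-6) + (1 - ALPHA) * translated
    return score

# Hypothetical data: a three-token Chinese document, two-term English query.
doc = ["贸易", "协定", "贸易"]
trans = {"trade": {"贸易": 0.8}, "agreement": {"协定": 0.9}}
background = {"trade": 0.001, "agreement": 0.001}
score = query_likelihood(["trade", "agreement"], doc, trans, background)
```

Documents would then be ranked by this score for each query; a term with no translation into the document falls back to its background probability rather than zeroing out the whole product.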
BBN: Description of the PLUM System as Used for MUC-4( Book )

2 editions published in 1992 in English and held by 2 WorldCat member libraries worldwide

Traditional approaches to the problem of extracting data from texts have emphasized hand-crafted linguistic knowledge. In contrast, BBN's PLUM system (Probabilistic Language Understanding Model) was developed as part of a DARPA-funded research effort on integrating probabilistic language models with more traditional linguistic techniques. Our research and development goals are * more rapid development of new applications, * the ability to train (and re-train) systems based on user markings of correct and incorrect output, * more accurate selection among interpretations when more than one is found, and * more robust partial interpretation when no complete interpretation can be found. A central assumption of our approach is that in processing unrestricted text for data extraction, a non-trivial amount of the text will not be understood. As a result, all components of PLUM are designed to operate on partially understood input, taking advantage of information when available, and not failing when information is unavailable. We had previously performed experiments on components of the system with texts from the Wall Street Journal; however, the MUC-3 task was the first end-to-end application of PLUM. Very little hand-tuning of knowledge bases was done for MUC-4; since MUC-3, the system architecture as depicted in figure 1 has remained essentially the same. In addition to participating in MUC-4, since MUC-3 we focused on porting to new domains and a new language, and on performing various experiments designed to control recall/precision tradeoffs. To support these goals, the preprocessing component and the fragment combiner were made declarative; the semantics component was generalized to use probabilities on word senses; we expanded our treatment of reference; we enlarged the set of system parameters at all levels; and we created a new probabilistic classifier for text relevance which filters discourse events
Practical Suggestions for Writing Understandable, Correct Formal Specifications by R Weischedel( Book )

3 editions published in 1980 in English and held by 2 WorldCat member libraries worldwide

The first half of this report is tutorial. It describes the three major classes of formal specification languages and argues that understandability is critical for formal specifications. Reasons why they are difficult to understand are identified, and practical suggestions for making them more understandable follow from that. The suggestions are illustrated by the specification of a pattern-matching facility. Three practical suggestions for checking the correctness of formal specifications follow. (Author)
Optimal subset selection: multiple regression, interdependence and optimal network algorithms by David E Boyce( Book )

1 edition published in 1974 in English and held by 2 WorldCat member libraries worldwide

BBN: Description of the PLUM System as Used for MUC-3( Book )

2 editions published in 1991 in English and held by 2 WorldCat member libraries worldwide

Traditional approaches to the problem of extracting data from texts have emphasized handcrafted linguistic knowledge. In contrast, BBN's PLUM system (Probabilistic Language Understanding Model) was developed as part of a DARPA-funded research effort on integrating probabilistic language models with more traditional linguistic techniques. Our research and development goals are * more rapid development of new applications, * the ability to train (and re-train) systems based on user markings of correct and incorrect output, * more accurate selection among interpretations when more than one is found, and * more robust partial interpretation when no complete interpretation can be found. We have previously performed experiments on components of the system with texts from the Wall Street Journal; however, the MUC-3 task is the first end-to-end application of PLUM. All components except parsing were developed in the last 5 months, and cannot therefore be considered fully mature. The parsing component, the MIT Fast Parser [4], originated outside BBN and has a more extensive history prior to MUC-3. A central assumption of our approach is that in processing unrestricted text for data extraction, a non-trivial amount of the text will not be understood. As a result, all components of PLUM are designed to operate on partially understood input, taking advantage of information when available, and not failing when information is unavailable
A New Approach to Text Understanding( Book )

2 editions published in 1992 in English and held by 2 WorldCat member libraries worldwide

This paper first briefly describes the architecture of PLUM, BBN's text processing system, and then reports on some experiments evaluating the effectiveness of the design at the component level. Three features are unusual in PLUM's architecture: a domain independent deterministic parser, processing of (the resulting) fragments at the semantic and discourse level, and probabilistic models
BBN: Description of the PLUM System as Used for MUC-6( Book )

2 editions published in 1995 in English and held by 2 WorldCat member libraries worldwide

This paper provides a quick summary of our technical approach, which has been developing since 1991 and was first fielded in MUC-3. First a quick review of what is new is provided, then a walkthrough of the system components. Perhaps most interesting is our analysis, following the walkthrough, of what we learned through MUC-6 and of what directions we would take now to break the performance barriers of current information extraction technology
Computation of a subclass of inferences : presupposition and entailment by Aravind K Joshi( Book )

2 editions published in 1976 in English and held by 2 WorldCat member libraries worldwide

BBN: Description of the PLUM System as Used for MUC-5( Book )

2 editions published in 1993 in English and held by 2 WorldCat member libraries worldwide

Traditional approaches to the problem of extracting data from texts have emphasized hand-crafted linguistic knowledge. In contrast, BBN's PLUM system (Probabilistic Language Understanding Model) was developed as part of an ARPA-funded research effort on integrating probabilistic language models with more traditional linguistic techniques. Our research and development goals are: * more rapid development of new applications, * the ability to train (and re-train) systems based on user markings of correct and incorrect output, * more accurate selection among interpretations when more than one is found, and * more robust partial interpretation when no complete interpretation can be found. We began this research agenda approximately three years ago. During the past two years, we have evaluated much of our effort in porting our data extraction system (PLUM) to a new language (Japanese) and to two new domains. Three key design features distinguish PLUM: statistical language modeling, learning algorithms and partial understanding. The first key feature is the use of statistical modeling to guide processing. For the version of PLUM used in MUC-5, part of speech information was determined by using well-known Markov modeling techniques embodied in BBN's part-of-speech tagger POST [5]. We also used a correction model, AMED [3], for improving Japanese segmentation and part-of-speech tags assigned by JUMAN. For the microelectronics domain, we used a probabilistic model to help identify the role of a company in a capability (whether it is a developer, user, etc.). Statistical modeling in PLUM contributes to portability, robustness, and trainability. The second key feature is our use of learning algorithms both to obtain the knowledge bases used by PLUM's processing modules and to train the probabilistic algorithms. A third key feature is partial understanding. All components of PLUM are designed to operate on partially interpretable input
BBN PLUM: MUC-4 Test Results and Analysis( Book )

2 editions published in 1992 in English and held by 2 WorldCat member libraries worldwide

Our mid-term to long-term goals in data extraction from text for the next one to three years are to achieve much greater portability to new languages and new domains, greater robustness, and greater scalability. The novel aspect to our approach is the use of learning algorithms and probabilistic models to learn the domain-specific and language-specific knowledge necessary for a new domain and new language. Learning algorithms should contribute to scalability by making it feasible to deal with domains where it would be infeasible to invest sufficient human effort to bring a system up. Probabilistic models can contribute to robustness by allowing for words, constructions, and forms not anticipated ahead of time and by looking for the most likely interpretation in context. We began this research agenda approximately two years ago. During the last twelve months, we have focused much of our effort on porting our data extraction system (PLUM) to a new language (Japanese) and to two new domains. During the next twelve months, we anticipate porting PLUM to two or three additional domains. For any group to participate in MUC is a significant investment. To be consistent with our mid-term and long-term goals, we imposed the following constraints on ourselves in participating in MUC-4: * We would focus our effort on semi-automatically acquired knowledge. * We would minimize effort on handcrafted knowledge. * Most generally, we would minimize MUC-specific effort. Though the three self-imposed constraints meant our overall scores on the objective evaluation were not as high as if we had focused on handtuning and handcrafting the knowledge bases, MUC-4 became a vehicle for evaluating our progress on the long-term goals
Extracting Dynamic Evidence Networks( Book )

2 editions published in 2004 in English and held by 2 WorldCat member libraries worldwide

BBN's primary goal was to dramatically increase the accuracy of evidence extraction. Using a hybrid of statistical learning algorithms and handcrafted patterns, SERIF achieved 93% of human performance in extracting entities, events, and relations, and 96% of human performance in extracting relations given entities and events. A second performance objective was to be able to extract entities that have names at 80% of human performance. This performance was then further improved in the relation extraction work done in 2004. An additional objective was to have a prototype robust enough that it could extract evidence continually (24x7) from a daily English news feed. All objectives were achieved. BBN's SERIF system also represents a significant advance for extraction systems in architecture and implementation. The combination of general linguistic models trained on preexisting corpora with domain specific components trained for the particular task allows powerful linguistic analysis tools to be brought to bear on extracting the relations and events of a new domain. The use of propositions as an intermediate step was an important part of this strategy, encapsulating the literal meaning of the text from which the target relations could then be derived
BBN PLUM: MUC-3 Test Results and Analysis( Book )

2 editions published in 1991 in English and held by 2 WorldCat member libraries worldwide

Perhaps the most important facts about our participation in MUC-3 reflect our starting point and goals. In March, 1990, we initiated a pilot study on the feasibility and impact of applying statistical algorithms in natural language processing. The experiments were concluded in March, 1991 and led us to believe that statistical approaches can effectively improve knowledge-based approaches [Weischedel, et al., 1991a, Weischedel, Meteer, and Schwartz, 1991]. Due to the nature of that effort, we had focused on many well-defined algorithm experiments. We did not have a complete message processing system; nor was the pilot study designed to create an application system. For the Phase I evaluation, we supplied a module to New York University. At the time of the Phase I Workshop (12-14 February 1991) we decided to participate in MUC with our own entry. The Phase I Workshop provided invaluable insight into what other sites were finding successful in this particular application. On 25 February, we started an intense effort not just to be evaluated on the FBIS articles, but also to create essential components (e.g., discourse component and template generator) and to integrate all components into a complete message processing system. Although the timing of the Phase II test (6-12 May) was hardly ideal for evaluating our site's capabilities, it was ideally timed to serve as a benchmark prior to starting a four year plan for research and development in message understanding. Because of this, we were determined to try alternatives that we believed would be different than those employed by other groups, wherever time permitted. These are covered in the next section. Our results were quite positive, given these circumstances. Our max-tradeoff version achieved 45% recall and 52% precision with 22% overgeneration (See Figure 2). PLUM can be run in several modes, trading off recall versus precision and overgeneration
BBN's PLUM Probabilistic Language Understanding System( Book )

2 editions published in 1993 in English and held by 2 WorldCat member libraries worldwide

Traditional approaches to the problem of extracting data from texts have emphasized hand-crafted linguistic knowledge. In contrast, BBN's PLUM system (Probabilistic Language Understanding Model) was developed as part of an ARPA-funded research effort on integrating probabilistic language models with more traditional linguistic techniques. Our research and development goals are: * Achieving high performance in objective evaluations, such as the Tipster evaluations. * Reducing human effort in porting the natural language algorithms to new domains and to new languages. * Providing technology that is scalable to realistic applications. We began this research agenda approximately three years ago. During the past two years, we have ported our data extraction system (PLUM) to a new language (Japanese) and to two new domains
Practical Issues in Having a Usable Library of Software Specifications by R Weischedel( Book )

4 editions published in 1981 in English and held by 2 WorldCat member libraries worldwide

Though formal specifications of software modules offer much toward the design problems of large software systems, creating formal specifications is very difficult, requiring much upfront effort. This paper examines a common idea for dealing with the high cost of software, but in the context of specification. That idea is a pool or 'library' of specifications, so that it is easy to build on the work of others. Unlike other efforts that have concentrated on technical problems in having such a library, this paper identifies and studies several common sense requirements on such a library being effectively used. Such issues are more closely related to human factors than to technical problems; yet they are clearly as critical in use as technical issues. Our conclusions have arisen from two studies. In one, we wrote several module specifications in the form that might appear in a library. The modules varied in complexity from a stack to the kernel of a text editor; the text editor specifications ranged in length from 9 to 28 pages including copious comments as in-line documentation. The second study compared portions of the English and formal specifications of KSOS (Ford Aerospace, 1978), the Kernelized Secure Operating System. The paper may be viewed as rather tutorial about writing correct, understandable formal specifications for others to use in their system design
A Guide to IRUS-II Application Development( Book )

1 edition published in 1989 in English and held by 1 WorldCat member library worldwide

IRUS-II is the understanding subsystem of the Janus natural language interface. IRUS-II is a natural language understanding (NLU) shell. That is, it contains domain-independent algorithms, a large grammar of English, domain-independent semantic interpretation rules, and a domain-independent discourse component. In addition, several software aids are provided to customize the system to particular application domains. These software aids output the four knowledge bases necessary for IRUS-II to correctly interpret English utterances and generate appropriate code for simultaneous access to multiple application systems. Natural language interfaces, User interfaces, Knowledge bases. (jes)

Alternative Names
Weischedel, R.

Languages
English (53)

German (1)