WorldCat Identities

Fox, Heidi

Overview
Works: 9 works in 16 publications in 1 language and 17 library holdings
Genres: Drama 
Roles: Author, Actor
Most widely held works by Heidi Fox
BBN: Description of the PLUM System as Used for MUC-5 (Book)

2 editions published in 1993 in English and held by 2 WorldCat member libraries worldwide

Traditional approaches to the problem of extracting data from texts have emphasized hand-crafted linguistic knowledge. In contrast, BBN's PLUM system (Probabilistic Language Understanding Model) was developed as part of an ARPA-funded research effort on integrating probabilistic language models with more traditional linguistic techniques. Our research and development goals are:
* more rapid development of new applications,
* the ability to train (and re-train) systems based on user markings of correct and incorrect output,
* more accurate selection among interpretations when more than one is found, and
* more robust partial interpretation when no complete interpretation can be found.
We began this research agenda approximately three years ago. During the past two years, we have evaluated much of our effort in porting our data extraction system (PLUM) to a new language (Japanese) and to two new domains. Three key design features distinguish PLUM: statistical language modeling, learning algorithms, and partial understanding. The first key feature is the use of statistical modeling to guide processing. For the version of PLUM used in MUC-5, part-of-speech information was determined using well-known Markov modeling techniques embodied in BBN's part-of-speech tagger POST [5]. We also used a correction model, AMED [3], to improve the Japanese segmentation and part-of-speech tags assigned by JUMAN. For the microelectronics domain, we used a probabilistic model to help identify the role of a company in a capability (whether it is a developer, user, etc.). Statistical modeling in PLUM contributes to portability, robustness, and trainability. The second key feature is our use of learning algorithms both to obtain the knowledge bases used by PLUM's processing modules and to train the probabilistic algorithms. A third key feature is partial understanding: all components of PLUM are designed to operate on partially interpretable input.
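The Markov-model part-of-speech tagging the abstract attributes to POST can be sketched with a standard Viterbi decoder over an HMM. Everything concrete below (the tag set, the vocabulary, and all probabilities) is an invented toy example; POST's actual models and parameters are not described in this abstract.

```python
# Illustrative Viterbi decoder for a tiny HMM part-of-speech tagger.
# Tags, words, and probabilities are toy values, not POST's real model.

def viterbi(words, states, start_p, trans_p, emit_p):
    """Return the most probable tag sequence for `words`."""
    # V[t][s] = probability of the best path ending in state s at position t
    V = [{s: start_p[s] * emit_p[s].get(words[0], 1e-8) for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(words)):
        V.append({})
        new_path = {}
        for s in states:
            # Pick the best predecessor state for s at position t
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s].get(words[t], 1e-8), p)
                for p in states
            )
            V[t][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

states = ["DET", "NOUN", "VERB"]
start_p = {"DET": 0.6, "NOUN": 0.3, "VERB": 0.1}
trans_p = {
    "DET":  {"DET": 0.05, "NOUN": 0.9,  "VERB": 0.05},
    "NOUN": {"DET": 0.1,  "NOUN": 0.3,  "VERB": 0.6},
    "VERB": {"DET": 0.5,  "NOUN": 0.4,  "VERB": 0.1},
}
emit_p = {
    "DET":  {"the": 0.9, "a": 0.1},
    "NOUN": {"system": 0.5, "text": 0.5},
    "VERB": {"parses": 0.6, "tags": 0.4},
}

print(viterbi(["the", "system", "parses", "the", "text"],
              states, start_p, trans_p, emit_p))
# → ['DET', 'NOUN', 'VERB', 'DET', 'NOUN']
```

A production tagger such as POST would estimate these tables from tagged corpora and handle unknown words with back-off; the tiny floor probability (1e-8) here merely keeps unseen words from zeroing out a path.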
A New Approach to Text Understanding (Book)

2 editions published in 1992 in English and held by 2 WorldCat member libraries worldwide

This paper first briefly describes the architecture of PLUM, BBN's text processing system, and then reports on some experiments evaluating the effectiveness of the design at the component level. Three features are unusual in PLUM's architecture: a domain-independent deterministic parser, processing of the resulting fragments at the semantic and discourse levels, and probabilistic models.
How to Start a Hobby in Handwriting Analysis by Heidi Fox

1 edition published in 2015 in English and held by 2 WorldCat member libraries worldwide

This publication will provide you with valuable information on picking up a hobby in Handwriting Analysis. With in-depth information and details, you will not only have a better understanding of Handwriting Analysis but also gain valuable knowledge of it.
BBN: Description of the PLUM System as Used for MUC-6 (Book)

2 editions published in 1995 in English and held by 2 WorldCat member libraries worldwide

This paper provides a quick summary of our technical approach, which has been in development since 1991 and was first fielded in MUC-3. First we provide a quick review of what is new, then a walk-through of the system components. Perhaps most interesting is our analysis, following the walk-through, of what we learned through MUC-6 and of what directions we would take now to break the performance barriers of current information extraction technology.
BBN: Description of the PLUM System as Used for MUC-4 (Book)

2 editions published in 1992 in English and held by 2 WorldCat member libraries worldwide

Traditional approaches to the problem of extracting data from texts have emphasized hand-crafted linguistic knowledge. In contrast, BBN's PLUM system (Probabilistic Language Understanding Model) was developed as part of a DARPA-funded research effort on integrating probabilistic language models with more traditional linguistic techniques. Our research and development goals are:
* more rapid development of new applications,
* the ability to train (and re-train) systems based on user markings of correct and incorrect output,
* more accurate selection among interpretations when more than one is found, and
* more robust partial interpretation when no complete interpretation can be found.
A central assumption of our approach is that in processing unrestricted text for data extraction, a non-trivial amount of the text will not be understood. As a result, all components of PLUM are designed to operate on partially understood input, taking advantage of information when available and not failing when information is unavailable. We had previously performed experiments on components of the system with texts from the Wall Street Journal; however, the MUC-3 task was the first end-to-end application of PLUM. Very little hand-tuning of knowledge bases was done for MUC-4; since MUC-3, the system architecture as depicted in figure 1 has remained essentially the same. In addition to participating in MUC-4, since MUC-3 we have focused on porting to new domains and a new language, and on performing various experiments designed to control recall/precision tradeoffs. To support these goals, the preprocessing component and the fragment combiner were made declarative; the semantics component was generalized to use probabilities on word senses; we expanded our treatment of reference; we enlarged the set of system parameters at all levels; and we created a new probabilistic classifier for text relevance, which filters discourse events.
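The probabilistic text-relevance classifier mentioned at the end of this abstract could take many forms; one minimal sketch is a naive Bayes filter trained on texts labeled relevant or irrelevant. The training snippets, labels, and the naive Bayes formulation below are all illustrative assumptions; the abstract does not specify the actual model BBN used.

```python
# Toy probabilistic relevance filter: naive Bayes with Laplace smoothing.
# Training data and model choice are invented for illustration only.
import math
from collections import Counter

def train(docs):
    """docs: list of (word_list, label). Returns priors, per-class counts, vocab."""
    priors, counts, vocab = Counter(), {}, set()
    for words, label in docs:
        priors[label] += 1
        counts.setdefault(label, Counter()).update(words)
        vocab.update(words)
    return priors, counts, vocab

def classify(words, priors, counts, vocab):
    """Return the label with the highest posterior log-probability."""
    total = sum(priors.values())
    best, best_lp = None, float("-inf")
    for label in priors:
        lp = math.log(priors[label] / total)
        n = sum(counts[label].values())
        for w in words:
            # Add-one smoothing so unseen words do not zero out a class
            lp += math.log((counts[label][w] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [
    ("attack reported in the capital".split(), "relevant"),
    ("bombing injured two guards".split(), "relevant"),
    ("weather was mild on sunday".split(), "irrelevant"),
    ("the market closed higher".split(), "irrelevant"),
]
priors, counts, vocab = train(docs)
print(classify("guards reported an attack".split(), priors, counts, vocab))
# → relevant
```

Used as a front-end filter, such a classifier discards texts (or discourse events) unlikely to match the extraction domain before the more expensive linguistic components run, which is one way to trade recall against precision.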
Algorithms That Learn to Extract Information - BBN: Description of the SIFT System as Used for MUC-7 (Book)

2 editions published in 1998 in English and held by 2 WorldCat member libraries worldwide

For MUC-7, BBN has for the first time fielded a fully trained system for NE, TE, and TR; the results are all the output of statistical language models trained on annotated data, rather than of programs executing handwritten rules. Such trained systems have some significant advantages:
1. They can be easily ported to new domains by simply annotating data with semantic answers.
2. The complex interactions that make rule-based systems difficult to develop and maintain can instead be learned automatically from the training data.
We believe that the results in this evaluation are evidence that such trained systems, even at their current level of development, can perform roughly on a par with rules hand-tailored by experts. Since MUC-3, BBN has been steadily increasing the proportion of the information extraction process that is statistically trained. Already in MET-1, our name-finding results were the output of a fully statistical, HMM-based model, and that statistical IdentiFinder (trademark) model was also used for the NE task in MUC-7. For the MUC-7 TE and TR tasks, BBN developed SIFT, a new model that represents a significant further step along this path, replacing PLUM, a system requiring handwritten patterns, with a single integrated trained model.
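The "port a trained system by annotating data" idea can be illustrated with the simplest possible case: reading maximum-likelihood transition and emission estimates directly off name-tagged tokens. The tag set and the two training sentences below are invented, and real models such as IdentiFinder and SIFT are far richer (word features, back-off, smoothing); this only shows how annotated data, rather than handwritten rules, supplies the model.

```python
# Sketch: estimate HMM-style name-model parameters from annotated tokens.
# Tags and sentences are invented; not the actual IdentiFinder/SIFT models.
from collections import Counter, defaultdict

def estimate(tagged_sentences):
    """tagged_sentences: list of [(word, tag), ...].
    Returns MLE transition probs P(tag | prev_tag) and emission probs
    P(word | tag) as nested dicts, with "<s>" as the start state."""
    trans = defaultdict(Counter)   # trans[prev_tag][tag]
    emit = defaultdict(Counter)    # emit[tag][word]
    for sent in tagged_sentences:
        prev = "<s>"
        for word, tag in sent:
            trans[prev][tag] += 1
            emit[tag][word] += 1
            prev = tag
    trans_p = {p: {t: c / sum(cs.values()) for t, c in cs.items()}
               for p, cs in trans.items()}
    emit_p = {t: {w: c / sum(cs.values()) for w, c in cs.items()}
              for t, cs in emit.items()}
    return trans_p, emit_p

train_data = [
    [("BBN", "ORG"), ("fielded", "O"), ("SIFT", "ORG")],
    [("Heidi", "PER"), ("Fox", "PER"), ("works", "O"),
     ("at", "O"), ("BBN", "ORG")],
]
trans_p, emit_p = estimate(train_data)
print(emit_p["ORG"]["BBN"])   # 2 of the 3 ORG tokens in the data are "BBN"
```

Retargeting this "system" to a new domain means annotating new sentences and re-running `estimate`; no patterns are rewritten, which is the advantage claimed in point 1 above.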
BBN's PLUM Probabilistic Language Understanding System (Book)

2 editions published in 1993 in English and held by 2 WorldCat member libraries worldwide

Traditional approaches to the problem of extracting data from texts have emphasized hand-crafted linguistic knowledge. In contrast, BBN's PLUM system (Probabilistic Language Understanding Model) was developed as part of an ARPA-funded research effort on integrating probabilistic language models with more traditional linguistic techniques. Our research and development goals are:
* achieving high performance in objective evaluations, such as the Tipster evaluations,
* reducing human effort in porting the natural language algorithms to new domains and to new languages, and
* providing technology that is scalable to realistic applications.
We began this research agenda approximately three years ago. During the past two years, we have ported our data extraction system (PLUM) to a new language (Japanese) and to two new domains.
BBN PLUM: MUC-4 Test Results and Analysis (Book)

2 editions published in 1992 in English and held by 2 WorldCat member libraries worldwide

Our mid-term to long-term goals in data extraction from text for the next one to three years are to achieve much greater portability to new languages and new domains, greater robustness, and greater scalability. The novel aspect of our approach is the use of learning algorithms and probabilistic models to learn the domain-specific and language-specific knowledge necessary for a new domain and new language. Learning algorithms should contribute to scalability by making it feasible to deal with domains where it would be infeasible to invest sufficient human effort to bring a system up. Probabilistic models can contribute to robustness by allowing for words, constructions, and forms not anticipated ahead of time and by looking for the most likely interpretation in context. We began this research agenda approximately two years ago. During the last twelve months, we have focused much of our effort on porting our data extraction system (PLUM) to a new language (Japanese) and to two new domains. During the next twelve months, we anticipate porting PLUM to two or three additional domains. For any group, participating in MUC is a significant investment. To be consistent with our mid-term and long-term goals, we imposed the following constraints on ourselves in participating in MUC-4:
* We would focus our effort on semi-automatically acquired knowledge.
* We would minimize effort on handcrafted knowledge.
* Most generally, we would minimize MUC-specific effort.
Though these three self-imposed constraints meant our overall scores on the objective evaluation were not as high as if we had focused on hand-tuning and hand-crafting the knowledge bases, MUC-4 became a vehicle for evaluating our progress on the long-term goals.
The unborn (Visual)

1 edition published in 1965 in English and held by 1 WorldCat member library worldwide

The spirit of an unborn human being, afraid of life on earth among the living, asks several children whether they are happy to be alive.
 
Audience Level
Audience level: 0.92 (from 0.77 for How to Sta ... to 0.95 for A New Appr ...)

Languages
English (16)