WorldCat Identities

Palmucci, Jeff

Works: 4 works in 9 publications in 1 language and 11 library holdings
Classifications: Q335.M41
Most widely held works by Jeff Palmucci
Experience with Acore : implementing GHC with Actors by Massachusetts Institute of Technology (Book)

4 editions published in 1990 in English and held by 6 WorldCat member libraries worldwide

This paper presents a concurrent interpreter for the programming language Guarded Horn Clauses (GHC). GHC is a general-purpose concurrent logic programming language with a clean, simple semantics based upon unification and choice nondeterminism. Unlike typical implementations of GHC in logic programming languages, this interpreter is implemented in the Actor language Acore. The primary motivation for this work was to probe the strengths and weaknesses of Acore as a platform for developing sophisticated programs. We chose to implement a concurrent interpreter for GHC because this large, complex application provided a rich testbed for exploring Actor programming methodology. The interpreter is a pedagogical investigation of the mapping of GHC constructs onto the Actor model. Because we opted for simplicity over efficiency, the interpreter is inefficient in both time and space.
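The committed-choice behavior the abstract alludes to can be sketched in a few lines. This is an illustrative Python sketch, not the Acore implementation; the clause encoding (guard/body function pairs) and the `max/3` example are invented for exposition:

```python
# Sketch of GHC-style committed choice: each clause carries a guard and a
# body; the interpreter commits to a clause whose guard succeeds and
# abandons the alternatives (choice nondeterminism).

def solve(goal, clauses):
    """Commit to the first clause whose guard accepts the goal."""
    for guard, body in clauses:
        if guard(goal):        # guards must succeed or fail without side effects
            return body(goal)  # commitment: remaining clauses are discarded
    raise ValueError("no guard succeeded (in GHC the goal would suspend)")

# Toy example: max/3 written as two guarded clauses.
max_clauses = [
    (lambda g: g[0] >= g[1], lambda g: g[0]),  # max(X, Y, X) :- X >= Y | true.
    (lambda g: g[0] <  g[1], lambda g: g[1]),  # max(X, Y, Y) :- X <  Y | true.
]

print(solve((3, 7), max_clauses))  # → 7
```

In the Actor mapping the paper describes, each goal would instead be an actor that evaluates its guards concurrently and sends its commitment as a message; the sequential loop above only illustrates the commit semantics.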
BBN: Description of the PLUM System as Used for MUC-3 (Book)

2 editions published in 1991 in English and held by 2 WorldCat member libraries worldwide

Traditional approaches to the problem of extracting data from texts have emphasized handcrafted linguistic knowledge. In contrast, BBN's PLUM system (Probabilistic Language Understanding Model) was developed as part of a DARPA-funded research effort on integrating probabilistic language models with more traditional linguistic techniques. Our research and development goals are:
* more rapid development of new applications,
* the ability to train (and re-train) systems based on user markings of correct and incorrect output,
* more accurate selection among interpretations when more than one is found, and
* more robust partial interpretation when no complete interpretation can be found.
We have previously performed experiments on components of the system with texts from the Wall Street Journal; however, the MUC-3 task is the first end-to-end application of PLUM. All components except parsing were developed in the last 5 months and therefore cannot be considered fully mature. The parsing component, the MIT Fast Parser [4], originated outside BBN and has a more extensive history prior to MUC-3. A central assumption of our approach is that in processing unrestricted text for data extraction, a non-trivial amount of the text will not be understood. As a result, all components of PLUM are designed to operate on partially understood input, taking advantage of information when available, and not failing when information is unavailable.
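The design principle in the last sentence, components that pass on partial results rather than failing outright, can be sketched as follows. This is a hypothetical illustration, not PLUM's actual architecture; the stage names and the short-sentence parser limit are invented:

```python
# Illustrative sketch of fail-soft pipeline stages: each stage contributes
# what it can, and a stage failure leaves earlier results intact instead of
# aborting the whole analysis.

def run_pipeline(text, stages):
    result = {"text": text}
    for stage in stages:
        try:
            update = stage(result)
        except Exception:
            update = {}          # stage failed: continue with what we have
        result.update(update or {})
    return result

def tokenize(state):
    return {"tokens": state["text"].split()}

def parse(state):
    # Pretend the parser handles only short sentences; otherwise it fails.
    tokens = state.get("tokens", [])
    if len(tokens) > 5:
        raise RuntimeError("no complete parse")
    return {"parse": ("S", tokens)}

out = run_pipeline("troops attacked the village near the capital yesterday",
                   [tokenize, parse])
print("tokens" in out, "parse" in out)  # True False
```

Here the parse fails on the long sentence, but the tokens survive for downstream components, mirroring the abstract's "taking advantage of information when available, and not failing when information is unavailable."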
BBN PLUM: MUC-3 Test Results and Analysis (Book)

2 editions published in 1991 in English and held by 2 WorldCat member libraries worldwide

Perhaps the most important facts about our participation in MUC-3 reflect our starting point and goals. In March 1990, we initiated a pilot study on the feasibility and impact of applying statistical algorithms in natural language processing. The experiments were concluded in March 1991 and led us to believe that statistical approaches can effectively improve knowledge-based approaches [Weischedel, et al., 1991a; Weischedel, Meteer, and Schwartz, 1991]. Due to the nature of that effort, we had focused on many well-defined algorithm experiments. We did not have a complete message processing system; nor was the pilot study designed to create an application system. For the Phase I evaluation, we supplied a module to New York University. At the time of the Phase I Workshop (12-14 February 1991) we decided to participate in MUC with our own entry. The Phase I Workshop provided invaluable insight into what other sites were finding successful in this particular application. On 25 February, we started an intense effort not just to be evaluated on the FBIS articles, but also to create essential components (e.g., a discourse component and template generator) and to integrate all components into a complete message processing system. Although the timing of the Phase II test (6-12 May) was hardly ideal for evaluating our site's capabilities, it was ideally timed to serve as a benchmark prior to starting a four-year plan for research and development in message understanding. Because of this, we were determined, wherever time permitted, to try alternatives that we believed would differ from those employed by other groups. These are covered in the next section. Given these circumstances, our results were quite positive: our max-tradeoff version achieved 45% recall and 52% precision with 22% overgeneration (see Figure 2). PLUM can be run in several modes, trading off recall versus precision and overgeneration.
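The recall/precision/overgeneration figures quoted above follow the MUC-style scoring conventions, which can be worked through numerically. The definitions below are the standard MUC formulations (hedged, since the abstract does not restate them), and the counts are invented solely to illustrate how percentages like 45/52/22 arise; they are not BBN's actual MUC-3 tallies:

```python
# MUC-style scoring sketch: recall = correct / possible (expected fills),
# precision = correct / actual (fills produced), and
# overgeneration = spurious / actual. All counts below are hypothetical.

def muc_scores(correct, possible, actual, spurious):
    return {
        "recall": correct / possible,
        "precision": correct / actual,
        "overgeneration": spurious / actual,
    }

# Hypothetical example: 45 of 100 expected fills found correctly,
# 87 fills produced in total, 19 of them spurious.
scores = muc_scores(correct=45, possible=100, actual=87, spurious=19)
print({k: round(v, 2) for k, v in scores.items()})
# → {'recall': 0.45, 'precision': 0.52, 'overgeneration': 0.22}
```

The example makes the tradeoff concrete: producing fewer, safer fills raises precision and lowers overgeneration at the cost of recall, which is exactly the knob the abstract says PLUM's modes expose.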
The application of genetic algorithms to resource scheduling by Gilbert Syswerda

1 edition published in 1991 in English and held by 1 WorldCat member library worldwide

Audience Level
Audience level: 0.83 (from 0.72 for Experience ... to 0.99 for The applic ...)