Rowe, Neil C.
Overview
Works:  79 works in 146 publications in 2 languages and 987 library holdings 

Genres:  Conference papers and proceedings 
Roles:  Author, Editor 
Classifications:  QA76.73.P76, 005.133 
Publication Timeline
Most widely held works by Neil C Rowe
Artificial intelligence through Prolog by Neil C Rowe (Book)
20 editions published between 1983 and 1999 in English and held by 488 WorldCat member libraries worldwide
Digital libraries 99 : the Fourth ACM Conference on Digital Libraries, August 11-14, 1999, Berkeley, CA by Conference on Digital Libraries (4th : 1999 : Berkeley, Calif.) (Book)
7 editions published in 1999 in English and held by 80 WorldCat member libraries worldwide
Introduction to cyberdeception by Neil C Rowe (Book)
8 editions published in 2016 in English and German and held by 20 WorldCat member libraries worldwide
This book is an introduction to both offensive and defensive techniques of cyberdeception. Unlike most books on cyberdeception, this book focuses on methods rather than detection. It treats cyberdeception techniques that are current, novel, and practical, and that go well beyond traditional honeypots. It contains classroom-friendly features: (1) minimal use of programming details and mathematics, (2) modular chapters that can be covered in many orders, (3) exercises with each chapter, and (4) an extensive reference list. Cyberattacks have grown serious enough that understanding and using deception is essential to safe operation in cyberspace. The deception techniques covered are impersonation, delays, fakes, camouflage, false excuses, and social engineering. Special attention is devoted to cyberdeception in industrial control systems and within operating systems. This material is supported by a detailed discussion of how to plan deceptions and calculate their detectability and effectiveness. Some of the chapters provide further technical details of specific deception techniques and their application. Cyberdeception can be conducted ethically and efficiently when necessary by following a few basic principles. This book is intended for advanced undergraduate students and graduate students, as well as computer professionals learning on their own. It will be especially useful for anyone who helps run important and essential computer systems such as critical-infrastructure and military systems.
Rule-based statistical calculations on a database abstract by Neil C Rowe (Book)
9 editions published between 1982 and 1983 in English and Undetermined and held by 18 WorldCat member libraries worldwide
The size of data sets subjected to statistical analysis is increasing as computer technology develops. Quick estimates of statistics rather than exact values are becoming increasingly important to analysts. The author proposes a new technique for estimating statistics on a database, a top-down alternative to the bottom-up method of sampling. This approach precomputes a set of general-purpose statistics on the database, a database abstract, and then uses a large set of inference rules to make bounded estimates of other, arbitrary statistics requested by users. The inference rules form a new example of an artificial-intelligence expert system. There are several important advantages of this approach over sampling methods, as is demonstrated in part by detailed experimental comparisons for two quite different databases. (Author)
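The precomputed-abstract idea lends itself to a short sketch. The set names, the stored statistics, and the two inference rules below are invented for illustration; the actual system uses a much larger rule base.

```python
# Sketch of rule-based bounded estimation from a "database abstract":
# precomputed statistics on stored sets let us bound statistics on
# other sets without touching the raw data. All names and numbers here
# are invented for illustration.

abstract = {
    # set name: (count, min, max) of a salary attribute
    "employees":           (1000, 18_000, 95_000),
    "employees.engineers": (200, 40_000, 95_000),
}

def bound_mean(set_name):
    """Trivial rule: the mean of a set lies between its min and max."""
    _count, lo, hi = abstract[set_name]
    return lo, hi

def bound_intersection_count(a, b):
    """Rule: |A intersect B| is at most min(|A|, |B|) and at least 0
    (tighter lower bounds need |A union B| or the universe size)."""
    ca, cb = abstract[a][0], abstract[b][0]
    return 0, min(ca, cb)

lo, hi = bound_mean("employees.engineers")
print(f"mean engineer salary is in [{lo}, {hi}]")
print("intersection count bounds:",
      bound_intersection_count("employees", "employees.engineers"))
```

Chaining many such rules, as the report describes, narrows the bounds further than any single rule can.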
Three papers on rule-based estimation of statistics on databases by Neil C Rowe (Book)
6 editions published between 1982 and 1983 in English and held by 8 WorldCat member libraries worldwide
This report contains three papers on rule-based estimation of statistics on a database: an overview, followed by two more specialized papers. The first, Rule-based Statistical Calculations on a Database Abstract, is addressed to a general database audience. The second, Inheritance of Statistical Properties, is addressed to an artificial intelligence audience; the third, Diophantine Compromise of a Statistical Database, is addressed to an audience of database theorists.
Modelling degrees of item interest for a general database query system by Neil C Rowe (Book)
5 editions published in 1982 in English and held by 7 WorldCat member libraries worldwide
Many databases support decision-making. Often this means choices between alternatives according to partly subjective or conflicting criteria. Database query languages are generally designed for precise, logical specification of the data of interest, and tend to be awkward in the aforementioned circumstances. Information retrieval research suggests several solutions, but there are obstacles to generalizing these ideas to most databases. To address this problem, the authors propose a methodology for automatically deriving and monitoring degrees of interest among alternatives for a user of a database system. This includes a decision-theory model of the value of information to the user, and inference mechanisms, based in part on ideas from artificial intelligence, that can tune the model to observed user behavior. This theory has important applications to improving the efficiency and cooperativeness of the interface between a decision-maker and a database system. (Author)
Some links between turtle geometry and analytic geometry by Neil C Rowe
1 edition published in 1984 in Undetermined and held by 2 WorldCat member libraries worldwide
http://archive.org/details/somelinksbetween00rowe
Top-down statistical estimation on a database by Neil C Rowe
1 edition published in 1984 in Undetermined and held by 2 WorldCat member libraries worldwide
Prepared for: Chief of Naval Research
Exploiting captions for access to multimedia databases by Neil C Rowe
1 edition published in 1991 in English and held by 2 WorldCat member libraries worldwide
Descriptive captions help organize noncompetitive media. But automated use of captions in retrieval from computerized multimedia databases has not been much examined, because it would seem to require significant natural language processing. We argue that captions can be naturally expressed in a restricted language whose interpretation is easier than general natural language understanding. We describe a multimedia database system that stores interpreted captions in predicate calculus for each media datum; it then interprets restricted-language queries and finds matching media objects.
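A toy sketch of the caption-matching idea described above: caption facts stored in predicate form and matched against a query under a small type hierarchy. All predicate names, type names, and file names below are invented; the actual caption representation is far richer.

```python
# Toy predicate-calculus caption matching with a type hierarchy.
# Every name here is invented for illustration.

type_parent = {"destroyer": "ship", "ship": "vehicle", "aircraft": "vehicle"}

def is_a(concept, ancestor):
    """True if concept equals ancestor or inherits from it."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = type_parent.get(concept)
    return False

# Each media object: a set of (predicate, argument) facts from its caption.
captions = {
    "photo1.jpg": {("depicts", "destroyer"), ("location", "harbor")},
    "photo2.jpg": {("depicts", "aircraft"), ("location", "runway")},
}

def query(pred, concept):
    """Return media whose caption asserts (pred, c) for some c
    that is-a `concept` in the type hierarchy."""
    return [name for name, facts in captions.items()
            if any(p == pred and is_a(c, concept) for p, c in facts)]

print(query("depicts", "ship"))     # a destroyer is-a ship
print(query("depicts", "vehicle"))  # both photos match
```

The point of the hierarchy is that a query for "ship" finds a caption that only says "destroyer", without full natural-language understanding.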
Antisampling for estimation: an overview by Neil C Rowe
1 edition published in 1984 in Undetermined and held by 2 WorldCat member libraries worldwide
We survey a new way to get quick estimates of the values of simple statistics (like count, mean, standard deviation, maximum, median, and mode frequency) on a large data set. This approach is a comprehensive attempt (apparently the first) to estimate statistics without any sampling, by reasoning about various sets containing a population of interest. Our antisampling techniques have connections to those of sampling (and have duals in many cases), but they have different advantages and disadvantages, making antisampling sometimes preferable to sampling, sometimes not. In particular, they can only be efficient when data is in a computer, and they exploit computer science ideas such as production systems and database theory. Antisampling also requires the overhead of construction of an auxiliary structure, a database abstract. Tests on sample data show similar or better performance than simple random sampling. We also discuss more complex methods of sampling and their disadvantages.
Aiding teachers in constructing virtual-reality tutors by Neil C Rowe
1 edition published in 1993 in Undetermined and held by 2 WorldCat member libraries worldwide
http://archive.org/details/aidingteachersin00rowe
Absolute bounds on the mean and standard deviation of transformed data for constant-derivative transformations by Neil C Rowe
1 edition published in 1984 in Undetermined and held by 2 WorldCat member libraries worldwide
We investigate absolute bounds (or inequalities) on the mean and standard deviation of transformed data values, given only a few statistics on the original set of data values. Our work applies primarily to transformation functions whose derivatives are constant-sign for a positive range (e.g. logarithm, antilog, square root, and reciprocal). With such functions we can often get reasonably tight absolute bounds, so that distributional assumptions about the data needed for confidence intervals can be eliminated. We investigate a variety of methods of obtaining such bounds, first examining bounding curves which are straight lines, then those that are quadratic polynomials. While the problem of finding the best quadratic bound is an optimization problem with no closed-form solution, we display a variety of closed-form quadratic bounds which can come close to the optimal solution. We emphasize what can be done with prior knowledge of the mean and standard deviation of the untransformed data values, but do address some other statistics too. (Author)
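A concrete instance of the straight-line bounds the abstract mentions, for the logarithm (which is concave): Jensen's inequality bounds the mean of the transformed values from above, and the chord through the endpoint values bounds it from below. This is a minimal sketch assuming only the mean, minimum, and maximum of the positive untransformed data are known; whether it matches the report's exact constructions is an assumption.

```python
import math

def log_mean_bounds(mean, lo, hi):
    """Distribution-free bounds on mean(log x), given only the mean,
    min (lo > 0), and max of the data.
    Upper bound: Jensen's inequality, mean(log x) <= log(mean x).
    Lower bound: log is concave, so the chord from (lo, log lo) to
    (hi, log hi) lies below the curve; hence mean(log x) >= chord(mean)."""
    upper = math.log(mean)
    t = (mean - lo) / (hi - lo)  # where the mean sits in [lo, hi]
    lower = (1 - t) * math.log(lo) + t * math.log(hi)
    return lower, upper

# Check against a concrete data set (illustrative values).
data = [1.0, 2.0, 4.0, 8.0]
m, lo, hi = sum(data) / len(data), min(data), max(data)
true_val = sum(math.log(x) for x in data) / len(data)
lb, ub = log_mean_bounds(m, lo, hi)
assert lb <= true_val <= ub
```

For a convex transform such as the antilog, the two bounds simply swap roles.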
Exploiting capability constraints to solve global, two-dimensional path planning problems by R. F Richbourg (Book)
3 editions published in 1986 in English and held by 2 WorldCat member libraries worldwide
Mobile autonomous vehicles require the capability of planning routes over ranges that are too great to be characterized by local sensor systems. Completion of this task requires some form of map data. Much work has been done concerning planning paths through local areas, those which can be scanned by onboard sensor systems. However, planning paths based on long-range map data is a very different problem. Extant solution techniques require the search of discrete node-and-link representations which characterize continuous, two-dimensional problem environments. The authors assume the availability of topographic data organized into regions of homogeneous traversal cost. Given this, they present a solution technique for the long-range planning problem which relies on a Snell's law heuristic to limit a graph search for the optimal solution.
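The Snell's-law idea can be illustrated on a single straight boundary between two homogeneous-cost regions: the minimum-cost crossing point obeys c1*sin(theta1) = c2*sin(theta2), by the same argument as light refraction. The coordinates and costs below are invented, and the cited work embeds this condition in a graph search rather than a one-boundary solver.

```python
import math

def best_crossing(ax, ay, bx, by, c1, c2):
    """Find the crossing x on the horizontal boundary y=0 that minimizes
    total traversal cost c1*|A-P| + c2*|P-B|, where A=(ax,ay) lies in the
    region with per-unit cost c1 and B=(bx,by) in the region with cost c2.
    The cost is convex in x, so ternary search suffices."""
    def cost(x):
        return (c1 * math.hypot(x - ax, ay) +
                c2 * math.hypot(bx - x, by))
    lo, hi = min(ax, bx) - 10.0, max(ax, bx) + 10.0
    for _ in range(200):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if cost(m1) < cost(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

# A above the boundary in cheap terrain, B below in expensive terrain.
x = best_crossing(ax=0.0, ay=4.0, bx=10.0, by=-3.0, c1=1.0, c2=2.0)
# At the optimum, Snell's law holds: c1*sin(theta1) == c2*sin(theta2),
# angles measured from the boundary normal.
s1 = x / math.hypot(x, 4.0)                   # sin of incidence angle
s2 = (10.0 - x) / math.hypot(10.0 - x, 3.0)   # sin of refraction angle
assert abs(1.0 * s1 - 2.0 * s2) < 1e-6
```

Because c2 > c1, the optimal path bends to spend more distance in the cheaper region, exactly as a light ray bends toward the slower medium's normal.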
Efficient caption-based retrieval of multimedia information by Neil C Rowe
1 edition published in 1993 in Undetermined and held by 2 WorldCat member libraries worldwide
We describe MARIE1 and MARIE2, information retrieval systems for multimedia data. They exploit captions on the data and perform natural-language processing of them and of English retrieval requests. Some content analysis of the data is also performed to obtain additional descriptive information. The key to getting this approach to work is sufficiently fast processing. We achieve this by decomposing the problem into information filters and applying a new theory of optimal information filtering which we have developed.
Semiautomatic deabbreviation of source programs by Neil C Rowe
1 edition published in 1994 in Undetermined and held by 2 WorldCat member libraries worldwide
http://archive.org/details/semiautomaticdea00rowe
Antisampling for Estimation: An Overview by Neil C Rowe (Book)
3 editions published in 1984 in English and held by 2 WorldCat member libraries worldwide
We survey a new way to get quick estimates of the values of simple statistics (like count, mean, standard deviation, maximum, median, and mode frequency) on a large data set. This approach is a comprehensive attempt (apparently the first) to estimate statistics without any sampling, by reasoning about various sets containing a population of interest. Our antisampling techniques have connections to those of sampling (and have duals in many cases), but they have different advantages and disadvantages, making antisampling sometimes preferable to sampling, sometimes not. In particular, they can only be efficient when data is in a computer, and they exploit computer science ideas such as production systems and database theory. Antisampling also requires the overhead of construction of an auxiliary structure, a database abstract. Tests on sample data show similar or better performance than simple random sampling. We also discuss more complex methods of sampling and their disadvantages.
Instructions for use of the Metutor means-ends tutoring system by Neil C Rowe
1 edition published in 1993 in Undetermined and held by 2 WorldCat member libraries worldwide
http://archive.org/details/instructionsforu00rowe
Exploiting captions in retrieval of multimedia data by Neil C Rowe
1 edition published in 1992 in Undetermined and held by 2 WorldCat member libraries worldwide
Descriptive natural-language captions can help organize multimedia data. We describe our MARIE system, which interprets English queries directing the fetch of media objects. It is novel in the extent to which it exploits previously interpreted and indexed English captions for the media objects. Our routine filtering of queries through descriptively complex captions (as opposed to keyword lists) before retrieving data can actually improve retrieval speed, as media data are often bulky and time-consuming to retrieve and difficult to perform content analysis on, and even small improvements to query precision can pay off. Handling the English of captions and queries about them is not as difficult as it might seem, as the matching does not require deep understanding, just a comprehensive type hierarchy for caption concepts. An important innovation of MARIE is supercaptions, which describe sets of captions and can minimize caption redundancy. Keywords: databases, natural language, captions, multimedia.
Proceedings of the fourth ACM conference on Digital libraries by Digital Libraries '99 (Book)
2 editions published in 1999 in English and held by 1 WorldCat member library worldwide
Using local optimality criteria for efficient information retrieval with redundant information filters by Neil C Rowe (Book)
3 editions published in 1994 in English and held by 1 WorldCat member library worldwide
We consider information retrieval when the data, for instance multimedia, is computationally expensive to fetch. Our approach uses information filters to considerably narrow the universe of possibilities before retrieval. Then decisions must be made about the necessity, order, and concurrent processing of proposed filters (an execution plan). We develop simple polynomial-time local criteria for optimal execution plans, and show that most forms of concurrency are suboptimal with information filters. Although the general problem of finding an optimal execution plan is likely exponential in the number of filters, we show experimentally that our local optimality criteria, used in a polynomial-time algorithm, nearly always find the global optimum with 15 filters or fewer, a sufficient number for most applications. Our methods do not require special hardware and avoid the high processor idleness that is characteristic of massive-parallelism solutions to this problem. We apply our ideas to an important application, information retrieval of captioned data using natural-language understanding, a problem for which the natural-language processing can be the bottleneck if not implemented well. Keywords: filters, optimization, queries, conjunction, Boolean algebra, natural language.
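For independent conjunctive filters, a standard local criterion (derived from an adjacent-swap argument) orders filters by cost divided by rejection probability, cheapest-and-most-selective first. A sketch with invented costs and pass rates, brute-force checked against all orderings; whether this matches the paper's exact criteria is an assumption.

```python
from itertools import permutations

# Filters as (cost, pass_probability); values invented for illustration.
filters = [(5.0, 0.9), (1.0, 0.5), (3.0, 0.2), (2.0, 0.7)]

def expected_cost(order):
    """Expected cost to push one item through a conjunctive filter
    sequence: each filter runs only if all earlier filters passed."""
    total, p_reach = 0.0, 1.0
    for cost, p_pass in order:
        total += p_reach * cost
        p_reach *= p_pass
    return total

# Local criterion: sort by cost / (probability of rejection), ascending.
greedy = sorted(filters, key=lambda f: f[0] / (1.0 - f[1]))

# Brute-force check that the greedy order is globally optimal here.
best = min(permutations(filters), key=expected_cost)
assert abs(expected_cost(greedy) - expected_cost(best)) < 1e-9
```

The swap argument: running A before B costs c_A + p_A*c_B for the pair, so A should precede B exactly when c_A/(1-p_A) <= c_B/(1-p_B), which is what the sort key encodes.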
Related Identities
 Rrushi, Julian
 Fox, Edward A. (Edward Alan) 1950- Editor
 Association for Computing Machinery Special Interest Group on Information Retrieval
 Association for Computing Machinery Special Interest Group on Hypertext, Hypermedia and Web
 ACM Digital Library
 NAVAL POSTGRADUATE SCHOOL MONTEREY CA
 Springer International Publishing AG Publisher
 Stanford University Computer Science Department
 NAVAL POSTGRADUATE SCHOOL MONTEREY CA Dept. of COMPUTER SCIENCE
Associated Subjects
Artificial intelligence; Artificial intelligence--Data processing; Artificial intelligence--Study and teaching (Higher); Computer networks; Computer science; Computer security; Database management; Databases; Data encryption (Computer science); Digital libraries; Expert systems (Computer science); Information storage and retrieval systems; Interactive computer systems; Lattice paths; Libraries--Automation; Mathematical analysis; Navigation; Network analysis (Planning); Online databases; Problem solving; Programming languages (Electronic computers)--Semantics; Prolog (Computer program language); Question-answering systems; Robotics--Military applications; Statistics--Data processing; Vehicles, Remotely piloted