# Gay, David M.

Overview
Works: 53 works in 166 publications in 1 language and 856 library holdings. Genres: software, handbooks and manuals. Role: author.
Most widely held works by David M Gay
AMPL : a modeling language for mathematical programming by Robert Fourer( Book )

49 editions published between 1993 and 2009 in English and held by 397 WorldCat member libraries worldwide

An adaptive nonlinear least-squares algorithm by J. E. Dennis( )

12 editions published between 1977 and 1980 in English and held by 68 WorldCat member libraries worldwide

NL2SOL is a modular program for solving nonlinear least-squares problems that incorporates a number of novel features. It maintains a secant approximation S to the second-order part of the least-squares Hessian and adaptively decides when to use this approximation. S is 'sized' before updating, a technique similar to Oren-Luenberger scaling. The step-choice algorithm is based on minimizing a local quadratic model of the sum-of-squares function constrained to an elliptical trust region centered at the current approximate minimizer. This is accomplished using ideas discussed by Moré, together with a special module for assessing the quality of the step thus computed. These and other ideas behind NL2SOL are discussed, and its evolution and current implementation are also described briefly. (Author)
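NL2SOL's adaptive secant term S and trust region are beyond a short sketch, but the underlying "local quadratic model" is easy to illustrate in the S = 0 (Gauss-Newton) special case. The one-parameter fit below is a hypothetical example, not drawn from the report:

```python
# Minimal Gauss-Newton sketch for a one-parameter least-squares fit.
# Model: y(t) = exp(a * t).  NL2SOL's secant term S and trust region
# are omitted; this is only the S = 0 (Gauss-Newton) special case.
import math

ts = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.exp(0.5 * t) for t in ts]   # synthetic data, true a = 0.5

def residuals(a):
    return [math.exp(a * t) - y for t, y in zip(ts, ys)]

def jacobian(a):
    # d r_i / d a = t_i * exp(a * t_i)
    return [t * math.exp(a * t) for t in ts]

a = 0.0                                # starting guess
for _ in range(20):
    r, j = residuals(a), jacobian(a)
    g = sum(ji * ri for ji, ri in zip(j, r))   # gradient  J^T r
    h = sum(ji * ji for ji in j)               # GN Hessian J^T J
    step = -g / h
    a += step
    if abs(step) < 1e-12:
        break

print(round(a, 6))
```

On this exact synthetic data the iteration recovers a = 0.5; NL2SOL's contribution is deciding, per iteration, when the J^T J model above should be augmented by the secant term S.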
AMPL : a modeling language for mathematical programming : with AMPL Plus student edition for microsoft windows by Robert Fourer( Book )

12 editions published between 1993 and 1997 in English and held by 58 WorldCat member libraries worldwide

Some convergence properties of Broyden's method by David M Gay( )

4 editions published in 1977 in English and held by 58 WorldCat member libraries worldwide

In 1965 Broyden introduced a family of algorithms called (rank-one) quasi-Newton methods for iteratively solving systems of nonlinear equations. We show that when any member of this family is applied to an n x n nonsingular system of linear equations and direct-prediction steps are taken every second iteration, the solution is found in at most 2n steps. Specializing to the particular family member known as Broyden's (good) method, we use this result to show that Broyden's method enjoys local 2n-step Q-quadratic convergence on nonlinear problems.
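In one dimension Broyden's update collapses to the classical secant iteration, which conveys the derivative-free flavor of the family (the 2n-step result above concerns the general n-dimensional linear case, which this scalar sketch does not reproduce):

```python
# Secant iteration: the n = 1 case of Broyden's method, where the
# scalar "approximate Jacobian" b is rebuilt from the last two
# residuals (the 1-D secant condition b * s = y).
def secant(f, x0, x1, tol=1e-12, maxit=50):
    f0, f1 = f(x0), f(x1)
    for _ in range(maxit):
        b = (f1 - f0) / (x1 - x0)      # 1-D Broyden/secant update
        x0, f0 = x1, f1
        x1 = x1 - f1 / b               # quasi-Newton step
        f1 = f(x1)
        if abs(f1) < tol:
            break
    return x1

root = secant(lambda x: x * x - 2.0, 1.0, 2.0)
print(round(root, 10))   # -> 1.4142135624
```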
Solving systems of nonlinear equations by Broyden's method with projected updates by David M Gay( )

3 editions published in 1977 in English and held by 56 WorldCat member libraries worldwide

We introduce a modification of Broyden's method for finding a zero of n nonlinear equations in n unknowns when analytic derivatives are not available. The method retains the local Q-superlinear convergence of Broyden's method and has the additional property that if any or all of the equations are linear, it locates a zero of these equations in n+1 or fewer iterations. Limited computational experience suggests that our modification often improves upon Broyden's method.
On Modifying Singular Values to Solve Possibly Singular Systems of Non-Linear Equations( )

2 editions published in 1976 in English and held by 55 WorldCat member libraries worldwide

We show that if a certain nondegeneracy assumption holds, it is possible to guarantee the existence of a solution to a system of nonlinear equations f(x) = 0 whose Jacobian matrix J(x) exists but may be singular. The main idea is to modify small singular values of J(x) in such a way that the modified Jacobian matrix has a continuous pseudoinverse J⁺(x) and that a solution x* of f(x) = 0 may be found by determining an asymptote of the solution to the initial value problem x(0) = x₀, dx/dt = -J⁺(x)f(x). We briefly discuss practical (algorithmic) implications of this result. Although the nondegeneracy assumption may fail for many systems of interest (indeed, if the assumption holds and J(x*) is nonsingular, then x* is unique), algorithms using J⁺(x) may enjoy a larger region of convergence than those that require (an approximation to) J⁻¹(x).
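For a diagonal Jacobian the singular values are just the absolute diagonal entries, so the modification can be illustrated directly: singular values below a floor eps are raised to eps before inverting, keeping the inverse bounded and continuous. The fixed floor and the diagonal restriction are simplifications for illustration, not Gay's actual rule:

```python
# Modify small singular values of a diagonal "Jacobian" before
# inverting.  For diagonal J the singular values are |d_i|; entries
# below eps are raised to eps, so the resulting inverse stays
# bounded (near 1/eps) instead of blowing up as d_i -> 0.
EPS = 1e-6

def modified_inverse_diag(diag, eps=EPS):
    out = []
    for d in diag:
        s = max(abs(d), eps)           # floor the singular value
        sign = 1.0 if d >= 0 else -1.0
        out.append(sign / s)
    return out

print(modified_inverse_diag([2.0, 1e-12, -0.5]))
# middle entry is capped near 1/EPS rather than the unbounded 1e12
```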
Representing Symmetric Rank Two Updates by David M Gay( )

1 edition published in 1976 in English and held by 41 WorldCat member libraries worldwide

Various quasi-Newton methods periodically add a symmetric "correction" matrix of rank at most 2 to a matrix approximating some quantity A of interest (such as the Hessian of an objective function). In this paper we examine several ways to express a symmetric rank-2 matrix Δ as the sum of rank-1 matrices. We show that it is easy to compute rank-1 matrices Δ₁ and Δ₂ such that Δ = Δ₁ + Δ₂ and ‖Δ₁‖ + ‖Δ₂‖ is minimized, where ‖·‖ is any inner-product norm. Such a representation recommends itself for use in those computer programs that maintain A explicitly, since it should reduce cancellation errors and/or improve efficiency over other representations. In the common case where Δ is indefinite, a choice of the form Δ₁ = Δ₂ᵀ = xyᵀ appears best. This case occurs for rank-2 quasi-Newton updates Δ exactly when Δ may be obtained by symmetrizing some rank-1 update; such popular updates as the DFP, BFGS, PSB, and Davidon's new optimally conditioned update fall into this category.
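The symmetrized rank-1 form described above, D = x yᵀ + y xᵀ, is easy to check numerically: the result is symmetric, and every row is a combination of x and y, so its rank is at most 2. A small sketch with arbitrarily chosen vectors:

```python
# Build the symmetrized rank-1 form  D = x y^T + y x^T  and verify
# that it is symmetric; its rank is at most 2 by construction,
# since row i of D equals  y[i] * x + x[i] * y.
x = [1.0, 2.0, 3.0]
y = [0.0, -1.0, 1.0]

n = len(x)
D = [[x[i] * y[j] + y[i] * x[j] for j in range(n)] for i in range(n)]

assert all(D[i][j] == D[j][i] for i in range(n) for j in range(n))

print(D[0])   # -> [0.0, -1.0, 1.0]  (= y, since x[0] = 1, y[0] = 0)
```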
Computing optimal locally constrained steps by David M Gay( Book )

6 editions published between 1979 and 1980 in English and held by 8 WorldCat member libraries worldwide

In seeking to solve an unconstrained minimization problem, one often computes steps based on a quadratic approximation q to the objective function. A reasonable way to choose such steps is by minimizing q constrained to a neighborhood of the current iterate. This paper considers ellipsoidal neighborhoods and presents a new way to handle certain computational details when the Hessian of q is indefinite, paying particular attention to a special case which may then arise. The proposed step-computing algorithm provides an attractive way to deal with negative curvature. Implementations of this algorithm have proved very satisfactory in the nonlinear least-squares solver NL2SOL. (Author)
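For a quadratic model with a diagonal positive-definite Hessian, the constrained step has the familiar shifted form p_i(λ) = -g_i/(h_i + λ), with λ ≥ 0 chosen so that ‖p‖ equals the trust radius whenever the unconstrained minimizer lies outside it. A minimal sketch by bisection; the indefinite "hard case" that the paper actually addresses is not handled here:

```python
# Trust-region step for a diagonal positive-definite quadratic model:
# minimize  g^T p + 0.5 p^T H p   subject to  ||p|| <= delta.
# The solution is p_i(lam) = -g_i / (h_i + lam) with lam >= 0 chosen
# so that ||p(lam)|| = delta when the Newton step is too long.
import math

def tr_step(g, h, delta):
    def p(lam):
        return [-gi / (hi + lam) for gi, hi in zip(g, h)]
    def norm(v):
        return math.sqrt(sum(vi * vi for vi in v))
    if norm(p(0.0)) <= delta:          # Newton step already inside
        return p(0.0)
    lo, up = 0.0, 1.0
    while norm(p(up)) > delta:         # bracket lambda
        up *= 2.0
    for _ in range(200):               # bisect on ||p(lam)|| = delta
        mid = 0.5 * (lo + up)
        if norm(p(mid)) > delta:
            lo = mid
        else:
            up = mid
    return p(0.5 * (lo + up))

step = tr_step(g=[4.0, 2.0], h=[1.0, 1.0], delta=1.0)
print(round(math.hypot(*step), 6))   # step length hits delta -> 1.0
```

Production codes (including NL2SOL's step module) solve for λ with a safeguarded Newton iteration rather than bisection; bisection is used here only to keep the sketch short.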
On solving robust and generalized linear regression problems by David M Gay( Book )

5 editions published between 1979 and 1980 in English and held by 8 WorldCat member libraries worldwide

Many researchers employ mathematical models. Most models contain parameters, which may be chosen to make the model fit the available data as well as possible (in a sense that depends on the model). In this paper we consider the problem of choosing the parameters for a common class of models in which the desired parameter vector minimizes an (unconstrained) objective function. We briefly give some examples of such problems, then discuss ways to exploit the common structure that these problems share. This leads us to discuss strategies for solving general unconstrained minimization problems and to point out the advantages of using a so-called 'model/trust-region' approach, wherein the change made in the current parameter estimate is chosen so as to approximately minimize a local model of the objective function on an estimate of the region about the current iterate where this local model is reliable. For problems in which the residual vector r(x) is a nonlinear function of x, we recommend generalizations of some techniques that have proven worthwhile in nonlinear least-squares problems in which the optimal residual vector r(x*) may be either large or small.
Brown's method and some generalizations, with applications to minimization problems by David M Gay( Book )

6 editions published between 1975 and 1985 in English and held by 8 WorldCat member libraries worldwide

Newton's method attempts to find a zero of $f \in C^{1}(\mathbb{R}^{n})$ by taking a step which is intended to make all components of $f$ vanish at once; in this respect Newton's method processes the components of $f$ in parallel. In contrast, Brown's method and the generalizations thereof considered in this thesis process the components of $f$ serially, one after another. One major iteration of these methods may be described as follows: given the starting point (i.e., the current major iterate) $y_{0}$, linearize the first component $f_{1}$ of $f$ at $y_{0}$ and find a point $y_{1}$ in the $(n-1)$-dimensional hyperplane $H_{1}$ on which this linearization vanishes; in general, having found a point $y_{k}$ $(1 \leq k < n)$ in the $(n-k)$-dimensional hyperplane $H_{k}$ on which the heretofore constructed linearizations vanish, restrict $f_{k+1}$ to $H_{k}$, linearize this restriction at $y_{k}$, and find a point $y_{k+1}$ in the $(n-(k+1))$-dimensional hyperplane $H_{k+1}$ on which this linearization vanishes; stop when $y_{n}$ has been found and let $y_{n}$ be the next major iterate. When $f$ is a general nonlinear function and finite differences are used to construct the linearizations, this approach need do work equivalent to approximating only about half the components of $f'$ and thus requires only about half as many function evaluations per major iteration as the corresponding finite-difference Newton's method, while still enjoying the same rate of local convergence.
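For a linear $f$, one major iteration of the serial scheme above amounts to eliminating variables one equation at a time, much as in Gaussian elimination. A sketch for two linear equations, where the substitution step stands in for "restricting $f_{2}$ to the hyperplane $H_{1}$" (the equations themselves are hypothetical examples):

```python
# Brown-style serial elimination on two linear equations:
#   f1(x, y) = x + y - 3 = 0
#   f2(x, y) = 2x - y   = 0
# Step 1: "linearize" f1 (already linear) and solve it for x,
# parameterizing the hyperplane H1 = {(3 - y, y)}.
# Step 2: restrict f2 to H1 and solve the resulting 1-D equation.

def f2(x, y):
    return 2.0 * x - y

def x_on_H1(y):          # H1: f1 = 0  <=>  x = 3 - y
    return 3.0 - y

# f2 restricted to H1 is linear in y, so one secant-style step from
# two sample points solves it exactly.
y0, y1 = 0.0, 1.0
g0, g1 = f2(x_on_H1(y0), y0), f2(x_on_H1(y1), y1)
y = y1 - g1 * (y1 - y0) / (g1 - g0)
x = x_on_H1(y)

print(x, y)   # -> 1.0 2.0
```

In the genuinely nonlinear case the linearizations come from finite differences, which is where the roughly halved function-evaluation count quoted above arises.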
Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis : version 4.0 developers manual( )

1 edition published in 2006 in English and held by 6 WorldCat member libraries worldwide

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes
AMPL : a modeling language for mathematical programming : using the AMPL student edition under MS-DOS by Robert Fourer( Book )

5 editions published in 1993 in English and held by 6 WorldCat member libraries worldwide

DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis : version 4.0 user's manual( )

1 edition published in 2006 in English and held by 6 WorldCat member libraries worldwide

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies
Implementing Brown's method by David M Gay( Book )

4 editions published in 1975 in English and Undetermined and held by 6 WorldCat member libraries worldwide

AMPL : a modeling language for mathematical programming( Book )

1 edition published in 1993 in English and held by 6 WorldCat member libraries worldwide

DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis : version 4.0 reference manual( )

1 edition published in 2006 in English and held by 6 WorldCat member libraries worldwide

The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications
AMPL : a mathematical programming language by Robert Fourer( Book )

5 editions published between 1987 and 1989 in English and held by 5 WorldCat member libraries worldwide

On convergence testing in model/trust-region algorithms for unconstrained optimization by David M Gay( Book )

2 editions published in 1982 in English and held by 4 WorldCat member libraries worldwide

On Scolnik's proposed polynomial-time linear programming algorithm by David M Gay( Book )

3 editions published in 1973 in English and held by 3 WorldCat member libraries worldwide

At a recent symposium, Hugo Scolnik expressed some ideas leading to an algorithm which he thought might solve the linear programming problem in polynomial time. We examine the algorithm and find that it often fails to solve the linear programming problem, even in the special cases considered by Scolnik. We conclude that the algorithm probably cannot be modified to work properly.
AMPL : a modeling language for mathematical programming by Robert Fourer( )

1 edition published in 1994 in English and held by 3 WorldCat member libraries worldwide

Languages: English (123)