Luby, Michael George
Overview
Works:  66 works in 142 publications in 3 languages and 1,001 library holdings 

Genres:  Conference papers and proceedings 
Roles:  Author, Editor, Creator, Contributor 
Publication Timeline
Most widely held works by Michael George Luby
Randomization and approximation techniques in computer science : second international workshop, RANDOM '98, Barcelona, Spain,
October 8-10, 1998 : proceedings by
Michael George Luby(
Book
)
24 editions published between 1998 and 1999 in English and Italian and held by 408 WorldCat member libraries worldwide
This book constitutes the refereed proceedings of the Second International Workshop on Randomization and Approximation Techniques in Computer Science, RANDOM'98, held in Barcelona, Spain, in October 1998. The 26 revised full papers presented were carefully reviewed and selected for inclusion in the proceedings. Also included are three invited contributions. Among the topics addressed are graph computation, derandomization, pattern matching, computational geometry, approximation algorithms, search algorithms, sorting, and networking algorithms
Pseudorandomness and cryptographic applications by
Michael George Luby(
Book
)
10 editions published in 1996 in English and held by 303 WorldCat member libraries worldwide
"A pseudorandom generator is an easytocompute function that stretches a short random string into a much longer string that "looks" just like a random string to any efficient adversary. One immediate application of a pseudorandom generator is the construction of a private key cryptosystem that is secure against chosen plaintext attack."BOOK JACKET. "There do not seem to be natural examples of functions that are pseudorandom generators. On the other hand, there do seem to be a variety of natural examples of another basic primitive: the oneway function. A function is oneway if it is easy to compute but hard for any efficient adversary to invert on average."BOOK JACKET. "The first half of the book shows how to construct a pseudorandom generator from any oneway function. Building on this, the second half of the book shows how to construct other useful cryptographic primitives, such as private key cryptosystems, pseudorandom function generators, pseudorandom permutation generators, digital signature schemes, bit commitment protocols, and zeroknowledge interactive proof systems. The book stresses rigorous definitions and proofs."BOOK JACKET
Pairwise independence and derandomization by
Michael George Luby(
)
10 editions published between 1995 and 2007 in English and held by 68 WorldCat member libraries worldwide
The article is based on a series of lectures given by the authors in 1995, where the notes were scribed by the attending students. (The detailed list of scribes and other contributors can be found in the Acknowledgements section at the end of the manuscript.) The current version is essentially the same, with a few minor changes. We note that this publication takes place a decade after the lectures were given. Much has happened in the area of pseudorandomness and derandomization since, and perhaps a somewhat different viewpoint, different material, and a different style would be chosen were these lectures given today. Still, the material presented is self-contained and is a prime manifestation of the "derandomization" paradigm. The material does, however, lack references to newer work. We recommend that the reader interested in randomness, derandomization, and their interplay with computational complexity consult the following books and surveys, as well as their extensive bibliographies
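A standard example of the pairwise independent hash families treated in these lectures is the affine family h_{a,b}(x) = (a·x + b) mod p over a prime field. A minimal sketch; the prime `P` and the function names are our choices for illustration:

```python
from itertools import product

P = 101  # a prime; the family maps Z_P to Z_P

def make_hash(a: int, b: int):
    """h_{a,b}(x) = (a*x + b) mod P; drawing (a, b) uniformly from
    Z_P x Z_P yields a pairwise independent family."""
    return lambda x: (a * x + b) % P

def pairwise_uniform(x: int, y: int) -> bool:
    """Exhaustively verify pairwise independence at two fixed points
    x != y: the pair (h(x), h(y)) takes every value in Z_P x Z_P for
    exactly one choice of (a, b), so it is uniform over random h."""
    counts = {}
    for a, b in product(range(P), repeat=2):
        h = make_hash(a, b)
        pair = (h(x), h(y))
        counts[pair] = counts.get(pair, 0) + 1
    return len(counts) == P * P and all(c == 1 for c in counts.values())
```

Only 2 log P random bits (the pair (a, b)) are needed to sample h, which is exactly the economy that derandomization by pairwise independence exploits.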
Raptor codes by
Amin Shokrollahi(
)
6 editions published between 2011 and 2014 in English and held by 53 WorldCat member libraries worldwide
The R10 and RQ codes have been and will continue to be adopted into a number of standards, and thus there are publicly available specifications that describe exactly how to implement these codes. However, the standards' specifications provide no insight into the rationale for the design choices made. One of the primary purposes of this document is to provide this design rationale. We provide results of extensive simulations of R10 and RQ codes to show the behavior of these codes in many different scenarios
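The R10 and RQ codes are fountain (rateless) codes. As a toy illustration of the fountain idea only: the sketch below XORs random subsets of source symbols and decodes by peeling. The standardized R10/RQ constructions use carefully designed degree distributions and precoding; nothing here is the standardized algorithm, and all names are ours.

```python
import random

def lt_encode_stream(data, rng):
    """Endless stream of (index_set, xor_value) encoded symbols: each
    symbol XORs a random subset of the source symbols (toy degree
    distribution, unlike the designed distributions of R10/RQ)."""
    k = len(data)
    while True:
        d = min(rng.choice([1, 1, 2, 2, 2, 3, 4]), k)
        idx = frozenset(rng.sample(range(k), d))
        val = 0
        for i in idx:
            val ^= data[i]
        yield idx, val

def lt_decode(k, symbols):
    """Peeling decoder: substitute already-recovered sources into each
    symbol, then resolve any symbol whose index set shrinks to size 1."""
    data = [None] * k
    pending = [[set(idx), val] for idx, val in symbols]
    changed = True
    while changed and any(v is None for v in data):
        changed = False
        for sym in pending:
            idx, val = sym
            for i in [i for i in idx if data[i] is not None]:
                idx.discard(i)
                val ^= data[i]
            sym[1] = val
            if len(idx) == 1:
                (i,) = idx
                if data[i] is None:
                    data[i] = val
                    changed = True
    return data

def recover(data, seed=0, max_symbols=1000):
    """Fountain property: collect symbols until decoding succeeds."""
    stream = lt_encode_stream(data, random.Random(seed))
    collected = []
    for _ in range(max_symbols):
        collected.append(next(stream))
        out = lt_decode(len(data), collected)
        if all(v is not None for v in out):
            return out
    raise RuntimeError("decoding did not converge")
```

The receiver needs no particular symbols, only enough of them, which is why such codes suit lossy broadcast channels.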
Randomization and Approximation Techniques in Computer Science Second International Workshop, RANDOM'98 Barcelona, Spain,
October 8-10, 1998 Proceedings by
Michael George Luby(
)
1 edition published in 1998 in English and held by 30 WorldCat member libraries worldwide
Monte Carlo methods for estimating system reliability by
Michael George Luby(
Book
)
7 editions published between 1983 and 1984 in English and held by 14 WorldCat member libraries worldwide
The second format for the representation of F can be described as follows. A network is an undirected graph G, where the edges in the graph correspond to the components in the system. Let G be a planar network and let x1, ..., xK be K specified nodes in G. For the planar K-terminal problem, the network is in a failing state if there is no path of working edges between some pair of specified nodes. We develop a Monte Carlo algorithm to estimate Pr[F] for the planar K-terminal problem. The algorithm works especially well when the edge failure probabilities are small. In this case the algorithm produces an estimator Y that is close to Pr[F] with high probability in time polynomial in the size of the graph. This compares very favorably with the execution times of other methods used for solving this problem
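For orientation, the naive baseline for the K-terminal problem is direct Monte Carlo: sample each edge's state and test terminal connectivity. This sketch is that baseline, not the paper's specialized estimator for planar graphs with small failure probabilities; the interface and names are ours.

```python
import random

def kterminal_failure_prob(nodes, edges, terminals, p_fail, trials, seed=0):
    """Crude Monte Carlo estimate of Pr[F] for the K-terminal problem:
    sample each edge up/down independently, then check whether all
    specified terminals remain connected by working edges."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        parent = {v: v for v in nodes}
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]  # path halving
                v = parent[v]
            return v
        for u, v in edges:
            if rng.random() >= p_fail:         # edge is working
                parent[find(u)] = find(v)      # union the endpoints
        root = find(terminals[0])
        if any(find(t) != root for t in terminals[1:]):
            failures += 1
    return failures / trials
```

When Pr[F] is tiny, almost every sample reports "no failure", which is exactly why the paper's small-failure-probability estimator is the interesting case.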
Optimal parallelization of Las Vegas algorithms by
Michael George Luby(
Book
)
4 editions published in 1993 in English and held by 12 WorldCat member libraries worldwide
Abstract: "Let A be a Las Vegas algorithm, i.e., A is a randomized algorithm that always produces the correct answer when it stops but whose running time is a random variable. In [1] a method was developed for minimizing the expected time required to obtain an answer from A using sequential strategies which simulate A as follows: run A for a fixed amount of time t₁, then run A independently for a fixed amount of time t₂, etc. The simulation stops if A completes its execution during any of the runs. In this paper, we consider parallel simulation strategies for this same problem, i.e., strategies where many sequential strategies are executed independently in parallel using a large number of processors. We present a close to optimal parallel strategy for the case when the distribution of A is known. If the number of processors is below a certain threshold, we show that this parallel strategy achieves almost linear speedup over the optimal sequential strategy. For the more realistic case where the distribution of A is not known, we describe a universal parallel strategy whose expected running time is only a logarithmic factor worse than that of an optimal parallel strategy. Finally, the application of the described parallel strategies to a randomized automated theorem prover confirms the theoretical results and shows that in most cases good speedup can be achieved up to hundreds of processors, even on networks of workstations."
On the computational complexity of finding stable state vectors in connectionist models (Hopfield nets) by
Gail H Godbeer(
Book
)
5 editions published between 1987 and 1988 in English and held by 11 WorldCat member libraries worldwide
Monte Carlo approximation algorithms for enumeration problems by
Richard Alan Karp(
Book
)
3 editions published in 1987 in English and held by 9 WorldCat member libraries worldwide
A new Monte Carlo method for estimating the failure probability of an n-component system by
Richard M Karp(
Book
)
2 editions published in 1983 in English and held by 7 WorldCat member libraries worldwide
A new formula for the probability of a union of events is used to express the failure probability of an n-component system. A very simple Monte Carlo algorithm based on the new probability formula is presented. The input to the algorithm gives the failure probabilities of the n components of the system and a list of the failure sets of the system. The output is an unbiased estimator of the failure probability of the system. We show that the average value of the estimator over many runs of the algorithm tends to converge quickly to the failure probability of the system. The overall time to estimate the failure probability with high accuracy compares very favorably with the execution times of other methods used for solving this problem
Grid geometries which preserve properties of Euclidean geometry : a study of graphics line drawing algorithms by
Michael George Luby(
Book
)
3 editions published in 1986 in English and held by 6 WorldCat member libraries worldwide
Proceedings of the Workshop "Randomized Algorithms and Computation" : held December 17-22, 1995, in Berkeley, California by
Workshop Randomized Algorithms and Computation <1995, Berkeley, Calif.>(
Book
)
3 editions published in 1997 in English and held by 4 WorldCat member libraries worldwide
Foundations and Trends : Pairwise Independence and Derandomization(
)
1 edition published in 2006 in English and held by 4 WorldCat member libraries worldwide
Efficient PRAM simulation on a distributed memory machine by
Richard M Karp(
Book
)
2 editions published in 1993 in English and held by 4 WorldCat member libraries worldwide
The best previous simulations use a simpler scheme based on hashing and have much larger delay: Θ(log(n)/log log(n)) for the simulation of an n-processor PRAM on an n-processor DMM, and Θ(log(n)) in the case where the simulation is time-processor optimal.
Approximating the number of solutions of a GF [2] polynomial by
Marek Karpiński(
Book
)
3 editions published in 1990 in German and English and held by 4 WorldCat member libraries worldwide
Pairwise independence and derandomization by
Michael George Luby(
Book
)
1 edition published in 2006 in English and held by 3 WorldCat member libraries worldwide
On removing randomness from a parallel algorithm for minimum cuts by
Michael George Luby(
Book
)
2 editions published in 1993 in Undetermined and English and held by 3 WorldCat member libraries worldwide
Abstract: "The weighted minimum cut problem in a graph is a fundamental problem in combinatorial optimization. Recently, Karger suggested a randomized parallel algorithm for this problem. We show that a similar algorithm can be implemented using only O(log²n) random bits. We also show that our result holds for computing minimum weight kcuts, where k is fixed."
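For context, the randomized algorithm being derandomized is Karger's contraction algorithm: contract uniformly random edges until two super-nodes remain; the edges crossing them form a cut, and each trial finds a minimum cut with probability at least 2/(n(n-1)). A sketch of the unweighted, repeated-trial version; the names are ours.

```python
import random

def karger_min_cut(n, edges, trials, seed=0):
    """Best cut over many independent contraction trials; with enough
    trials this is a minimum cut with high probability."""
    rng = random.Random(seed)
    best = len(edges)
    for _ in range(trials):
        parent = list(range(n))
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]  # path halving
                v = parent[v]
            return v
        components = n
        while components > 2:
            u, v = edges[rng.randrange(len(edges))]
            ru, rv = find(u), find(v)
            if ru != rv:                # skip edges already contracted away
                parent[ru] = rv
                components -= 1
        cut = sum(1 for u, v in edges if find(u) != find(v))
        best = min(best, cut)
    return best
```

Each trial consumes O(m log n) fresh random bits in this naive form; the paper's point is that O(log²n) bits suffice for a parallel implementation.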
Optimal speedup of Las Vegas algorithms by
Michael George Luby(
Book
)
2 editions published in 1993 in Undetermined and English and held by 3 WorldCat member libraries worldwide
Abstract: "Let A be a Las Vegas algorithm, i.e., A is a randomized algorithm that always produces the correct answer when it stops but whose running time is a random variable. We consider the problem of minimizing the expected time required to obtain an answer from A using strategies which simulate A as follows: run A for a fixed amount of time t₁, then run A independently for a fixed amount of time t₂, etc. The simulation stops if A completes its execution during any of the runs. Let S = (t₁, t₂ ...) be a strategy, and let l[subscript A] = inf[subscript S]T(A, S), where T(A, S) is the expected value of the running time of the simulation of A under strategy S. We describe a simple universal strategy of S[superscript univ], with the property that, for any algorithm A, T(A, S[superscript univ]) = O (l[subscript A]log (l[subscript A])). Furthermore, we show that this is the best performance that can be achieved, up to a constant factor, by any universal strategy."
Optimal parallelization of Las Vegas algorithms by
Michael George Luby(
Book
)
1 edition published in 1993 in English and held by 2 WorldCat member libraries worldwide
Self-testing/correcting with applications to numerical problems by Manuel Blum(
Book
)
2 editions published between 1990 and 1991 in English and held by 2 WorldCat member libraries worldwide
Abstract: "Suppose someone gives us an extremely fast program P that we can call as a black box to compute a function f. Should we trust that P works correctly? A selftesting/correcting pair for f allows us to: (1) estimate the probability that P(x) [not equal to] f(x) when x is randomly chosen; (2) on any input x, compute f(x) correctly as long as P is not too faulty on average. Furthermore, both (1) and (2) take time only slightly more than the original running time of P. We present general techniques for constructing simple to program selftesting/correcting pairs for a variety of numerical functions, including integer multiplication, modular multiplication, matrix multiplication, inverting matrices, computing the determinant of a matrix, computing the rank of a matrix, integer division, modular exponentiation and polynomial multiplication."
Associated Subjects
Algorithms Artificial intelligence Artificial intelligence--Data processing Coding theory Combinatorial analysis Combinatorial enumeration problems Combinatorial optimization Computational complexity Computer algorithms Computer graphics Computer programs--Verification Computer science Computer science--Mathematics Computer science--Statistical methods Computer software Computers--Reliability Data encryption (Computer science) Data structures (Computer science) Data transmission systems Error-correcting codes (Information theory) Mathematical optimization Monte Carlo method Multiprocessors Numbers, Random Polynomials Random number generators Statistics