Conservation of Information in Search: Measuring the Cost of Success

buffalo

Conservation of Information in Search: Measuring the Cost of Success - a peer-reviewed challenge to Darwinism

Abstract—Conservation of information theorems indicate that any search algorithm performs, on average, as well as random search without replacement unless it takes advantage of problem-specific information about the search target or the search-space structure. Combinatorics shows that even a moderately sized search requires problem-specific information to be successful. Computers, despite their speed in performing queries, are completely inadequate for resolving even moderately sized search problems without accurate information to guide them. We propose three measures to characterize the information required for successful search: 1) endogenous information, which measures the difficulty of finding a target using random search; 2) exogenous information, which measures the difficulty that remains in finding a target once a search takes advantage of problem-specific information; and 3) active information, which, as the difference between endogenous and exogenous information, measures the contribution of problem-specific information for successfully finding a target. This paper develops a methodology based on these information measures to gauge the effectiveness with which problem-specific information facilitates successful search. It then applies this methodology to various search tools widely used in evolutionary search.
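For concreteness, the three measures boil down to log-probabilities of success: writing p for the probability that blind random search hits the target and q for the probability that the assisted search does, endogenous information is -log2(p), exogenous information is -log2(q), and active information is their difference, log2(q/p). The short Python sketch below just evaluates that arithmetic on a made-up example; the probabilities are purely illustrative and are not taken from the paper.

```python
from math import log2

def endogenous_information(p):
    """Difficulty of hitting the target by blind random search: -log2(p)."""
    return -log2(p)

def exogenous_information(q):
    """Difficulty remaining once the assisted search is used: -log2(q)."""
    return -log2(q)

def active_information(p, q):
    """Information contributed by the problem-specific knowledge the search exploits."""
    return endogenous_information(p) - exogenous_information(q)

# Illustrative numbers only: a target occupying 1 point out of 2**20,
# and an assisted search that succeeds half the time.
p = 1 / 2**20
q = 0.5

print(f"endogenous information: {endogenous_information(p):.2f} bits")
print(f"exogenous information:  {exogenous_information(q):.2f} bits")
print(f"active information:     {active_information(p, q):.2f} bits")
```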
 
Abstract—Conservation of information theorems indicate that any search algorithm performs, on average, as well as random search without replacement
Correct. This is the No Free Lunch (NFL) theorem: averaged over all possible fitness functions on a search space, any algorithm performs as well as random search.

Hence on roughly half of all possible fitness functions a given algorithm will perform the same as or worse than random search, and on the other half the same algorithm will perform the same as or better; the gains and losses cancel out on average.

The NFL theorem also puts a constraint on the setup: the fitness function cannot change based on the progress of the search; any changes over time have to be independent of how the search is going. The toy enumeration below makes the averaging claim concrete.
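Here is a small Python sketch of that averaging (my own toy construction, not from either paper): it enumerates every fitness function on a five-point search space with three possible fitness values and compares a fixed-order blind search against a simple adaptive hill climber. Both evaluate the same number of distinct points, and their average best-found value across all functions comes out identical.

```python
import itertools

X = list(range(5))   # a five-point search space
Y = [0, 1, 2]        # possible fitness values
M = 3                # distinct evaluations allowed per run

def all_fitness_functions():
    """Every possible fitness function f: X -> Y."""
    for values in itertools.product(Y, repeat=len(X)):
        yield dict(zip(X, values))

def fixed_order_search(f):
    """'Blind' search: evaluate the first M points in a fixed order."""
    return max(f[x] for x in X[:M])

def greedy_search(f):
    """Adaptive search: evaluate an unvisited neighbour of the best point so far."""
    visited = {0}
    best_x, best_y = 0, f[0]
    while len(visited) < M:
        neighbours = [(best_x - 1) % len(X), (best_x + 1) % len(X)]
        candidates = [x for x in neighbours if x not in visited]
        if not candidates:                      # both neighbours already seen
            candidates = [x for x in X if x not in visited]
        x = candidates[0]
        visited.add(x)
        if f[x] > best_y:
            best_x, best_y = x, f[x]
    return best_y

totals = {"fixed order": 0.0, "greedy": 0.0}
count = 0
for f in all_fitness_functions():
    totals["fixed order"] += fixed_order_search(f)
    totals["greedy"] += greedy_search(f)
    count += 1

for name, total in totals.items():
    # Both averages come out identical, as the NFL theorems predict.
    print(f"{name:12s} average best-of-{M}: {total / count:.4f}")
```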
unless it takes advantage of problem-specific information about the search target or the search-space structure.
Or if it just happens to land in the lucky half or so of fitness functions where the algorithm in question does better than random search; not a high hurdle to cross.
It then applies this methodology to various search tools widely used in evolutionary search.
Which is a big problem, because the evolutionary mechanism changes the fitness landscape as the search progresses, based in part on the progress of the search itself. Hence the NFL theorems do not apply to evolutionary algorithms in real life, because of the feedback between the environment and the progress of the search (a toy sketch of such a co-evolving setup follows below). See David Wolpert's review of Dembski, "William Dembski's treatment of the No Free Lunch theorems is written in jello":

Indeed, throughout there is a marked elision of the formal details of the biological processes under consideration. Perhaps the most glaring example of this is that neo-Darwinian evolution of ecosystems does not involve a set of genomes all searching the same, fixed fitness function, the situation considered by the NFL theorems. Rather it is a co-evolutionary process. Roughly speaking, as each genome changes from one generation to the next, it modifies the surfaces that the other genomes are searching. And recent results indicate that NFL results do not hold in co-evolution.
The author of this review, David Wolpert, is also one of the authors of the paper that established the NFL theorems: Wolpert, D.H. and Macready, W.G. (1997), "No Free Lunch Theorems for Optimization," IEEE Transactions on Evolutionary Computation 1(1): 67-82. Straight from the horse's mouth.
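For what that co-evolutionary point looks like in code, here is a toy sketch (my own illustrative construction, not drawn from Wolpert's papers or from any model of biology): each genome's fitness is scored against the current population, so the landscape a genome sees shifts every generation as the search itself changes the population; this is exactly the kind of feedback the fixed-fitness-function NFL setting excludes.

```python
import random

GENOME_LEN = 8
POP_SIZE = 20

def random_genome():
    return tuple(random.randint(0, 1) for _ in range(GENOME_LEN))

def score(genome, population):
    # Frequency-dependent payoff: count the bit positions where this genome
    # differs from each member of the current population. There is no single
    # fixed f: X -> Y here; the score depends on who else is around.
    return sum(g != h
               for other in population
               for g, h in zip(genome, other))

def mutate(genome):
    # Flip one random bit.
    i = random.randrange(GENOME_LEN)
    return genome[:i] + (1 - genome[i],) + genome[i + 1:]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(50):
    # Fitness is recomputed against the *current* population, so the same
    # genome can score differently from one generation to the next.
    ranked = sorted(population, key=lambda g: score(g, population), reverse=True)
    parents = ranked[:POP_SIZE // 2]
    population = parents + [mutate(random.choice(parents))
                            for _ in range(POP_SIZE - len(parents))]

best = max(population, key=lambda g: score(g, population))
print("final best genome:", "".join(map(str, best)))
```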

rossum
 