PseuTonym
Guest
Let us suppose that some conjecture is to be resolved. In other words, there is a question, and we want an answer. What guarantee is there that our knowledge of the conjecture or question will somehow supply us with all of the beliefs, assumptions, or premises that we need to deduce that the answer is “yes”, or to deduce that it is “no”?
My answer is that there is no such guarantee. The reason is that there is an important distinction between a definition and a list of beliefs: understanding a question requires only definitions, while justifying an answer requires premises, and a belief is not merely part of a definition.
For example, consider a question of the form: “does this particular Turing machine halt?”
We can fully understand how the Turing machine functions, and we might also fully understand what it means to say that a Turing machine halts. However, if the truth of the matter is that this particular machine does not halt, then how do we know that we have an adequate basis of premises to justify the conclusion that it does not halt?
Maybe we simply do not have a powerful enough apparatus to reach the conclusion, even if we had an infinite amount of ingenuity and creativity in applying the existing apparatus. Nor can we resolve the question empirically: if the machine in fact never halts, then no amount of waiting and observing will ever show us that it does not halt.
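For a concrete illustration, here is a minimal Python sketch of the situation (my own example, using the Collatz conjecture as a stand-in for an arbitrary Turing machine; the function name collatz_run is hypothetical). Every rule of the program is fully understood, and "halts" is fully defined, yet whether it halts on every input is an open problem.

def collatz_run(n: int) -> int:
    """Iterate the Collatz map until reaching 1; return the step count.

    This loop halts for a given n exactly when the Collatz sequence
    starting at n eventually reaches 1. Whether it halts for *every*
    positive integer n is the (still open) Collatz conjecture.
    """
    steps = 0
    while n != 1:
        # Collatz map: halve n if it is even, otherwise send n to 3n + 1.
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_run(27))  # halts after 111 steps for this particular input

We understand the question "does collatz_run halt on every input?" completely, but no known basis of premises lets us deduce either answer, which is exactly the gap between understanding a question and possessing premises that settle it.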
Understanding a question does not necessarily imply having premises powerful enough to justify any particular answer to it. If definitions are provided to us, then we might understand the question, but it could be that no definitions would ever allow us to justify any particular answer.