Definition VERSUS beliefs

  • Thread starter: PseuTonym

PseuTonym (Guest):
Let us suppose that some conjecture is to be resolved. In other words, there is a question, and we want an answer. What guarantee is there that our knowledge (of the conjecture or question) will somehow supply us with all of the beliefs or assumptions or premises that we need to either deduce that the answer to the question is “no”, or deduce that the answer to the question is “yes”?

My answer is that there is no such guarantee. In other words, there is an important distinction between a definition and a list of beliefs. A belief is not merely part of a definition.

For example, consider a question of the form: “does this particular Turing machine halt?”

We can fully understand how the Turing machine functions, and we might also fully understand what it means to say that a Turing machine halts. However, if the truth of the matter is that this particular Turing machine does not halt, how do we know that we have an adequate basis of premises to justify the conclusion that it does not halt?

Maybe we simply do not have a powerful enough apparatus to reach the conclusion, even if we were to have an infinite amount of ingenuity and creativity in applying the existing apparatus. We cannot resolve the question empirically by waiting and observing: if it happens to be a Turing machine that does not halt, we will never observe it halting.

Understanding a question does not necessarily imply having powerful enough premises to justify any particular answer to the question. If definitions are provided to us, then we might understand the question, but it could be that no definitions would ever allow us to justify any particular answer to the question.
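To make the worry concrete, here is a minimal Python sketch (an illustration only; the program and its name are my own, though the open question it relies on, Goldbach's conjecture, is real). We understand every line of the program, yet whether it halts is exactly the open question of whether Goldbach's conjecture has a counterexample:

```python
# A program whose behaviour we understand completely, but whose halting
# status is an open question: it halts exactly when it finds an even
# number >= 4 that is not the sum of two primes (a counterexample to
# Goldbach's conjecture).

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_searcher():
    n = 4
    while True:
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n)):
            return n   # halts only if a counterexample exists
        n += 2         # otherwise the search continues forever
```

Running this can only ever confirm a counterexample; if the conjecture is true, no amount of running, and nothing contained in the definitions above, supplies premises from which to deduce that the loop never returns.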
 
PseuTonym said:
Let us suppose that some conjecture is to be resolved. In other words, there is a question, and we want an answer. What guarantee is there that our knowledge (of the conjecture or question) will somehow supply us with all of the beliefs or assumptions or premises that we need to either deduce that the answer to the question is “no”, or deduce that the answer to the question is “yes”?
The first point to recognize – which is critical in reaching an understanding of the question you’re asking here – is that not all questions operate in the same domain. Therefore, not all questions are answerable by one set of methods. For instance, in order to answer a question about a mathematical assertion, a person appeals to axioms and theorems, and to proofs constructed by various methods (e.g., induction). On the other hand, in order to answer a question about the physical universe, one utilizes a variety of empirical methods (observation, the scientific method, etc).

Some questions operate in a different context still. For example, if you walked into a murder trial and attempted to use the Pythagorean theorem to prove the guilt or innocence of the defendant, you’d get a lot of weird looks from the officers of the court. Rather, they would expect you to present evidence regarding the alleged crime, demonstrating motive or intent or opportunity, and then the appropriate people would judge the soundness of your case against the case of the other side.

Have you ever read John Cardinal Newman’s An Essay in Aid of a Grammar of Assent? His discussion of the illative sense is precisely what you’re asking about here.
PseuTonym said:
For example, consider a question of the form: “does this particular Turing machine halt?”
You’re really asking about how the machine operates over an arbitrary input, right?

The question, then, isn’t in the class of questions about mathematical theorems, nor of Newman’s illative sense. Rather, it’s one of those questions which admit of empirical evidence. Interestingly, you’ve acknowledged that even these questions – which, intuitively, should have algorithms that invariably lead to answers! – might not have answers that we can discover a priori!
PseuTonym said:
We cannot resolve the question empirically by waiting and observing: if it happens to be a Turing machine that does not halt, we will never observe it halting.
Precisely. Yet, not all questions operate in this domain.
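A small Python sketch of why observation only cuts one way (the two-state machine and the helper name halts_within are my own example, not anything from the thread): simulating with a step budget can confirm halting when it happens, but an exhausted budget tells us nothing about non-halting.

```python
# Observation is one-sided evidence: a finite run can confirm "it halts",
# but it can never confirm "it never halts".
# Example machine: the standard two-state busy beaver, encoded as
# (state, symbol) -> (symbol_to_write, head_move, next_state); 'H' = halt.

MACHINE = {
    ('A', 0): (1, +1, 'B'),
    ('A', 1): (1, -1, 'B'),
    ('B', 0): (1, -1, 'A'),
    ('B', 1): (1, +1, 'H'),
}

def halts_within(machine, budget):
    """True if the machine halts on a blank tape within `budget` steps.
    False only means "has not halted yet"; it is not proof of non-halting."""
    tape, head, state = {}, 0, 'A'
    for _ in range(budget):
        if state == 'H':
            return True
        write, move, state = machine[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
    return state == 'H'

print(halts_within(MACHINE, 100))   # True: this machine happens to halt after 6 steps
```

For a machine that never halts, every finite budget returns False, and no finite observation distinguishes “has not halted yet” from “never halts”.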
PseuTonym said:
Understanding a question does not necessarily imply having powerful enough premises to justify any particular answer to the question. If definitions are provided to us, then we might understand the question, but it could be that no definitions would ever allow us to justify any particular answer to the question.
For which domain of questions are you making these assertions?
 