Strictly, the model is a set of mathematical equations; the plot diagram is the output of the model. Again, I would rephrase: “We see a model which mirrors an aspect of reality reasonably accurately.” Remember that for any given set of points there are infinitely many increasingly complex mathematical models which can match that set. Science picks the simplest adequate model – Occam’s razor – but there are always alternatives. If the cubic equation is good enough, then science will ignore the quartic or quintic equations as unnecessarily complex. There are always alternative models available.
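A minimal sketch of this point (the data and numbers here are invented for illustration): once a cubic passes exactly through four points, you can add any multiple of a polynomial that vanishes at those points and get a quartic that fits the data just as exactly – so the data alone never singles out one model.

```python
import numpy as np

# Four observed data points (hypothetical values).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 0.0, 5.0])

# The unique cubic through four points (degree = number of points - 1).
cubic = np.polyfit(x, y, 3)

# (x-0)(x-1)(x-2)(x-3) is zero at every observed point, so adding any
# multiple of it leaves the fit at the data unchanged.
vanishing = np.poly([0.0, 1.0, 2.0, 3.0])

for c in (1.0, -2.5, 100.0):
    quartic = np.concatenate(([0.0], cubic)) + c * vanishing
    # Every one of these quartics matches the observations exactly.
    assert np.allclose(np.polyval(quartic, x), y)

print("infinitely many quartics fit the same four points")
```

Each choice of `c` gives a different quartic, all indistinguishable at the observed points – Occam’s razor is what picks the cubic.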
True – we have a model that mirrors an aspect of reality. That says something about the model (it’s good for learning something about reality) – and about reality itself (that aspect of reality is “modellable” using that method).
We bring some expectations to the model first – that’s how we choose one versus an infinite number of others. We think the model will reflect reality. But we can be surprised by the results.
When that happens, we ask “why is this aspect of reality consistent (or not) with the model”? It could be that the model is not good. Or it could be that there is something in reality that causes the results we see. That’s where the Design argument focuses.
But we do tweak the underlying mathematical model, to generate a new plot diagram.
Agreed. It’s a process of building a model, testing it with observations from reality (to see if the data “fits” what we predicted), and then tweaking the model to improve it. So we analyze the model – and we analyze reality.
How? If the model matches, then we make more predictions. If the model does not match, then we tweak it (for small mismatches) or move to a different model (for large mismatches).
If our observations match what the model predicted, then we don’t change the model but keep it in place. But we are not trying to analyze the model so much as we are analyzing what we see in reality. The bigger point is not that we observed things happening – and it’s not even that we predicted things. The point that tends to get lost is that we need to ask, “why does this model/simulation/prediction process work?” Why is reality like this? The fact that we can predict things is very good, but that fact can also tell us something about underlying causes.
Now, if the model doesn’t work, there are more problems. First, we can’t re-draw the target after we have fired the arrows (in archery terms). So we can tweak the model, but that will only help for the future, not explain what happened in the past.
Secondly, do we really know that the model didn’t match? When we see data that doesn’t match the model, do we change the model, or do we keep sampling data, believing that the model will prove more accurate in the future?
Third, Occam’s Razor does apply, because you could build a model so complex that it mirrors past reality exactly but is impractical for predicting the future. For example, in a statistical model of a casino card game you wouldn’t include details like: “On Thursday, June 28, with an outside temperature of 78 degrees and 237 people in the building, the dealer distributed 75 cards in 3 hours and a man from Georgia won $150.” There are too many particulars and nothing to generalize from. So building a “useful” model requires some intuition and subjectivity – it’s not pure science and math. We have to agree on what it means to say “it works” or “it’s useful”.
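This is the familiar overfitting trade-off, and it can be sketched in a few lines (all the data here is synthetic and the setup is my own illustration, not anything from the discussion): a model with enough free parameters “memorizes” every past observation, yet a simpler model predicts new observations better.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reality is roughly linear, observed with a little noise.
x = np.linspace(0.0, 1.0, 12)
y = 2.0 * x + rng.normal(0.0, 0.1, size=x.size)

# Past observations vs. future (held-out) observations.
train_x, test_x = x[:8], x[8:]
train_y, test_y = y[:8], y[8:]

simple = np.polyfit(train_x, train_y, 1)    # "the trend is linear"
complex_ = np.polyfit(train_x, train_y, 7)  # passes through all 8 past points

def mse(model, xs, ys):
    """Mean squared prediction error of a polynomial model."""
    return float(np.mean((np.polyval(model, xs) - ys) ** 2))

# The complex model "explains" the past essentially perfectly...
print("past error:  simple =", mse(simple, train_x, train_y),
      " complex =", mse(complex_, train_x, train_y))
# ...but the simple model predicts the future far better.
print("future error: simple =", mse(simple, test_x, test_y),
      " complex =", mse(complex_, test_x, test_y))
```

The degree-7 model fits the eight past points exactly (near-zero past error) but extrapolates wildly; the line, which ignores the noise, is the “useful” model in the sense being discussed.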
We have to start with certain assumptions – and we import those assumptions into our interpretation of the data.
Have you ever seen Giant’s Causeway?
Interesting. I hadn’t seen that before, but it’s a good example of how we could use all of these aspects of the Design Argument to try to understand something about reality. Another good example would be crop circles. We look for clues to try to figure out their origin.
Adams’ example is more subtle than that. He says, “Each puddle exactly fits the hole it is in, no matter what shape the hole is.” Obviously this is easily explained by the physics of liquids under gravity. Adams then takes it further: the puddle reasons from the exact fit that the hole must have been specially created just for it. That is not correct. The fit really is exact, but it is due to natural forces, not design. The analogy with people who extrapolate to the design of the universe is a good one.
I think this destroys any possibility of understanding design at all, though. It’s not predictive of anything, since every possible outcome will “fit” the model (or the model will match every observation). It doesn’t answer what caused the hole or why the water filled it. Is that puddle statistically interesting, or is it random?
The good thing about bringing assumptions and expectations to the analysis is that you have to state them up front – before you actually observe anything, or at least before you apply the model to future results.
We would use whatever method was appropriate. What I am saying is that “There is a model of X,” tells us nothing about whether X was designed or not. Other techniques are required.
True – and good point. The model is just one part of the analysis and it may be completely the wrong choice, even though the results look convincing.