Probably, but if you really are implementing pattern X, then there should be some common neural feature that is present every time you implement X. By “process” I mean some pathway that occurs in the brain. I don’t intend it to mean the entire state of the brain at any given time. There may be a lot of “noise” that makes finding that feature under different circumstances very difficult, but it should be there. If there is no common feature, then it would follow that we do not really recognize patterns after all, because there is nothing that is conserved between different instances of recognition of the same pattern. For instance, if I calculate 2+2 and 5+7, the state of the brain will be different because the numbers being summed are different, but since I am “doing addition” in both cases there should be a “doing addition” pathway that can be teased out from everything else, at least in principle.
Re “there should be some common neural feature”, I’d like to see experimental confirmation rather than just assume it. A computer program would be designed to parse a string and recognize a plus sign as meaning addition if there are digits on either side, but we don’t design our minds; the paths are learned, so perhaps 2+2 and 5+7 take different paths. For instance, if you’re an American carpenter, 5+7 might make you think 1 foot rather than 12 inches.
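To make that contrast concrete, here is a minimal sketch (a toy Python example of my own, not a claim about neuroscience) of the designed case: one fixed code path handles every simple sum, so 2+2 and 5+7 differ only in their inputs. The question is whether learned neural paths have to be organized that way.

```python
# Purely illustrative toy example (not a model of the brain):
# a *designed* program uses one fixed code path for every sum it parses.

def parse_and_add(expr: str) -> int:
    """Recognize a plus sign with digits on either side and return the sum."""
    left, plus, right = expr.partition("+")
    if plus != "+" or not (left.strip().isdigit() and right.strip().isdigit()):
        raise ValueError(f"not a simple addition: {expr!r}")
    return int(left) + int(right)

# The same designed path handles both inputs; only the data differ.
print(parse_and_add("2+2"))  # 4
print(parse_and_add("5+7"))  # 12
```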
*Right, but that would be an epistemological limitation and not an “in principle” one. There would be a conserved process even if it is not knowable by the human mind (it would be knowable in itself though).*
*This seems to me to be reasonable. But like I said above, if we really are recognizing patterns, and pattern recognition really is explained without remainder by neuroscience, then there has to be some neural process that is conserved between different instances of recognition of the same pattern.*
Again I’d want evidence. When we first learn to drive or speak another language, we have to make a lot of conscious effort, since the paths are being learned. As they get laid down, they become more automatic. But at some point we may realize a connection between two things we learned separately, and the paths change again. That may make us confused about concepts we thought we understood, as if we’ve gone backwards (in line with what I think educators refer to as the learning cycle).
Well, the hypothesis we are currently considering is that mental activity is completely explained by neural data. We also have the empirical observation that humans recognize patterns. The prediction the hypothesis makes is that we would expect to find a conserved neural process every time a given pattern is recognized. If such a conserved neural process could not in principle exist determinately (i.e. each and every pattern gets a process, however simple or complex, that is uniquely identified with that pattern and no other), then the hypothesis is falsified by the evidence. If this formulation is not adequate, could you suggest a prediction of the above hypothesis that we can both agree on?
As above, I’m not convinced about these conserved neural processes. The basic issue is that we can’t design how we reason; the processes are learned, which means they may change.
Perhaps you need a less reductionist approach.
The hypothesis I would like to defend is that mental activity cannot be completely explained by neural data alone and requires some non-physical faculty to explain pattern recognition.
OK. A usual way of proceeding is that H[sub]1[/sub] proposes a link between two things, while H[sub]0[/sub] always represents the default position of denying any such link. (H[sub]0[/sub] is the null hypothesis, H[sub]1[/sub] is the alternative hypothesis).
For instance, H[sub]1[/sub] proposes that a new drug cures a disease, while H[sub]0[/sub] denies any connection between the drug and a cure. H[sub]1[/sub] needs to make a prediction, for instance that the drug will have a statistically significant effect compared to a placebo, and it stands or falls by the outcome.
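To illustrate the logic with made-up numbers (a toy sketch, not real trial data): H[sub]1[/sub] predicts the drug group does better than the placebo group, and we reject H[sub]0[/sub] only if the observed difference is statistically significant.

```python
# Toy illustration of the H0/H1 logic with synthetic data (not a real trial).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
placebo = rng.normal(loc=50, scale=10, size=100)  # hypothetical recovery scores
drug = rng.normal(loc=55, scale=10, size=100)     # hypothetical drug-group scores

# H0: no link between drug and recovery; H1: the drug has an effect.
t_stat, p_value = stats.ttest_ind(drug, placebo)
alpha = 0.05  # conventional significance threshold

if p_value < alpha:
    print(f"p = {p_value:.4f}: reject H0 in favour of H1 (a drug effect).")
else:
    print(f"p = {p_value:.4f}: fail to reject H0 (no significant effect detected).")
```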
Now I take it we agree there is a link between mind and brain - the question is whether there’s an additional link, to your non-physical faculty.
What you may want for H[sub]1[/sub] sounds similar to the proposal for dark matter, in the sense that it too states there are phenomena which can’t be explained by known, observable matter, and it too predicts the presence of some unknown factor, which it labels dark matter. H[sub]0[/sub], as always, denies any link, so H[sub]1[/sub] stands or falls on the detection of dark matter, for which there are various candidates.
The difference is that the dark matter hypothesis was only proposed after all other attempts to explain the observations failed, while you want to justify your hypothesis by finding something you hope can’t otherwise be explained. I’d worry that your concern isn’t to explain but to justify your hypothesis. That may predispose you to not search hard enough for explanations, so as soon as you publish you’ll get shot down in flames by people who looked harder (cf. the history of supposedly irreducible complexity).
But let’s go back a step. Does hylomorphism require a non-physical faculty? (How then could you distinguish it from Cartesian dualism?)