Quote:
Mark Murray wrote:
When you're called on the research and it invalidates your theory, you reply that it's "a bad analysis" and "some kinks remain"? ... Why didn't you point it out?
|
I believe that the phrase "kinks remain" WAS -- fairly -- pointing that out. None of the "kinks" affects my analysis.
You want application? Great. So do I. Go get some... I want understanding as well -- so I can extend both my application and all applications.
Quote:
Why did you use bad research? Why do you believe you know more than a published author in a known journal?
|
The latter is the fallacy of appeal to authority. When an analysis of a model is objectively in error, it is in error -- regardless of who published it. I just showed you the error he made: he noted the nature of the supports in one plane but neglected it in another. That is not a matter of opinion.
But in fairness to his research, that distinction was probably not a necessary condition for him to consider. It matters for what I am doing -- but likely not for what he was doing. That is another reason we have to question the uses of any research or other source of "authority": it may not be well fitted to our problem. But that doesn't mean it is useless.
An item of bad analysis in a piece of research does not invalidate the whole thing. Galileo was not wrong, conceptually or empirically, about the nature of gravity just because he had not measured a precise value for the acceleration g -- he was just less precise. The thing about science is that there really is no "bad" research, and there is certainly no such thing as perfect research. Negative results are as valuable -- often more valuable -- than results that confirm the hypothesis. Very often a key error -- once seen for what it is by good critical analysis -- can be of more value than either the premise or the result of the research. That is called serendipity.
Before you do experiments or decide what data to gather and how, you first have to look at things like this and think through the problems conceptually -- otherwise you have no idea what to test for or what data to try to capture. You don't just pile up arbitrary measurements (or anecdotal reports or subjective impressions) and hope they tell you something. They can help frame your working concepts, however.