If you think you understand what’s meant by scientific evidence, then this little piece of logic — first pointed out by the philosopher Carl Hempel in 1945 — should give you food for thought.
The “common-sense” view of evidence goes something like this. We start with a hypothesis: for example, “All ravens are black”. We then examine as many ravens as we can find. With every raven we examine that turns out to be black, our confidence in our hypothesis increases a little, until we’re prepared to accept it as a near-certainty. (Of course, if we find a single white raven then our hypothesis has been disproved, but that’s a different matter.) In other words, we consider every black raven that we find to be supporting evidence for our hypothesis.
Here’s the catch. Logically, any statement of the form “A implies B” is equivalent to its contrapositive, “not-B implies not-A”. For example, “All ravens are black” is equivalent to “All things that are not black are not ravens”. So supporting evidence for the first version of our hypothesis must also be supporting evidence for the second version of our hypothesis, and vice versa.
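The equivalence of a conditional and its contrapositive can be checked mechanically by running through every combination of truth values. Here is a small sketch in Python (the function name is my own invention, not standard terminology):

```python
from itertools import product

def implies(p, q):
    # "p implies q" is false only in the one case where p is true and q is false
    return (not p) or q

# Check every truth assignment for A ("is a raven") and B ("is black"):
# "A implies B" versus its contrapositive "not-B implies not-A"
for a, b in product([False, True], repeat=2):
    direct = implies(a, b)                   # "if it is a raven, it is black"
    contrapositive = implies(not b, not a)   # "if it is not black, it is not a raven"
    assert direct == contrapositive

print("A implies B and not-B implies not-A agree on every truth assignment")
```

Since the two statements agree in all four cases, classical logic treats them as one and the same claim — which is exactly what lets the paradox get going.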
But if a black raven provides supporting evidence for the statement “All ravens are black”, then any non-black thing that turns out not to be a raven provides supporting evidence for the statement “All things that are not black are not ravens”. For example, if we observe something pink and it turns out to be a flamingo, this provides supporting evidence that all non-black things are non-ravens — so a pink flamingo is evidence that all ravens are black!
This is bad enough, but it gets worse. Suppose that instead we’d started with the hypothesis “All ravens are green”. This is logically equivalent to “All things that are not green are not ravens”, which is also supported by our observation of a pink flamingo. So the pink flamingo seems to be evidence simultaneously that all ravens are black and that all ravens are green…
As you might expect, philosophers have had a lot of fun with Hempel’s “paradox”, and there is no universally accepted solution. One approach is to deny that there is any such thing as supporting evidence, and to argue that scientific hypotheses can only be falsified, never confirmed — but this leads us into trouble when we try to explain why one non-falsified theory seems preferable to another non-falsified theory which has “less evidence in its favour”. There are other approaches that try to formulate the idea of supporting evidence in probabilistic terms — but these give rather different results depending on how exactly the idea is formulated. Other philosophers have attacked the idea that the two versions of the hypothesis are equivalent — but then we have to be very careful reformulating the laws of acceptable reasoning without throwing out too many of the tools of basic logic.
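To give a flavour of the probabilistic approach, here is one common way the calculation can be set up, with entirely invented numbers: we compare the hypothesis that all ravens are black against a rival on which exactly one raven is white, and ask how much an observed non-black non-raven shifts the balance. This is a sketch of one formulation, not a settled answer:

```python
from fractions import Fraction

# Invented toy world: under either hypothesis there are 899,900
# non-black objects that are not ravens.
N_NONBLACK_NONRAVENS = 899_900

prior_all_black = Fraction(1, 2)  # assumed 50/50 prior between the two hypotheses

# We sample a random NON-BLACK object and find it is not a raven.
# If all ravens are black, every non-black object is a non-raven:
p_obs_all_black = Fraction(1)
# If exactly one raven is white, one non-black object out of
# N_NONBLACK_NONRAVENS + 1 is a raven:
p_obs_one_white = Fraction(N_NONBLACK_NONRAVENS, N_NONBLACK_NONRAVENS + 1)

# Bayes' rule: posterior probability that all ravens are black
posterior_all_black = (prior_all_black * p_obs_all_black) / (
    prior_all_black * p_obs_all_black
    + (1 - prior_all_black) * p_obs_one_white
)

print(float(posterior_all_black))  # a whisker above 0.5
```

On this formulation the flamingo really does support “All ravens are black” — but only by a minuscule amount, because non-black non-ravens vastly outnumber ravens. Notice, too, that the answer depends on how the observation was obtained (sampling a random non-black object rather than a random object), which illustrates why different formulations give different results.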
I’m not going to recommend any particular solution here, but chewing over these questions is an excellent way to start thinking critically about what you consider to be evidence for believing things…