4 Answers

  1. If the results of the same study differ among different researchers, then you need to understand where and what violations occurred and conduct the research again, or recognize one of the studies as more objective based on its results (and even then, the experiment should be repeated by other people).
    This is how it ideally works in science. But science can be bribed, and there can be deliberate manipulation of results for selfish purposes. It turns out that one part of the scientific community defends scientific principles and fights against anti-scientific research, while the other part simply produces that very anti-scientific research. Who is a scientist in this situation and who is not, I think, is obvious.
    And of course, besides deliberate manipulation of results, all people make mistakes; but in any case, colleagues will correct and double-check =)
    Well, not every study we've heard about is recognized as science.
    In short: either explain the errors and violations made in the course of the study, or point to deliberate manipulation of the results.

  2. There is not, and cannot be, a universal explanation for every such case in science. If a scientist finds that the results of some studies contradict others, he tries to understand why this happened. Most often, it turns out that one of the researchers made a mistake, failed to account for side effects, or misinterpreted their data. But sometimes it points to the existence of phenomena we did not know about before.

  3. Discrepancies in results are a common occurrence in science and technology. They arise from the use of different laboratories, methods, equipment, traditions, and skills. Serious discrepancies can appear when assessing a phenomenon, for example, the presence or absence of a disease in a patient. Accordingly, there is a problem of coordinating data obtained by different methods and techniques. Coordination means correlating different results and explaining the discrepancy, rationalizing it, and in fact fitting some data to others. Explanations for these discrepancies are situational, meaning there are local coordination practices specific to a given place and community.

    In some places (laboratories, open spaces, and hospitals), a newer, more complex, and more technologically advanced technique is considered more reliable (the “gold standard”); in other places, on the contrary, a simpler, older technique is considered more reliable and is used for decision-making. As one doctor told me: “You see, medicine is always a consultation!” A discrepancy can be attributed to equipment breakdowns, poor staff qualifications, or even patients’ poor communication skills.

    You should also remember that, as a rule, data from failed tests are not published, and that during testing weaker versions are discarded in favor of a “stronger” one, which is strong but not necessarily true. In this sense, tests can be repeated over and over until the results look satisfactory.

    Scientists often turn such discrepancies to the benefit of science, offering, for example, Popper’s explanation: mistakes and selection make science stronger, and in the end science will arrive at the one correct answer. But from the perspective of Science and Technology Studies (STS), there is no limiting correct or objective state of affairs around which erroneous or more/less correct explanations cluster. There is a multiplicity of reality, associated with the fact that an object is constructed locally and may differ radically from similar objects in other places. In order for them to somehow relate, so that the object looks like “one,” various coordination techniques are used.

  4. With the understanding that this can be either the norm or a serious deviation from it. There is even such a thing as “meta-analysis”: collecting the results of individual studies, comparing them with each other, and, if serious differences are identified, trying to understand why they arose. It is bad when a result obtained by some researchers is then not reproduced by anyone else (provided, of course, that the other teams use equally high-quality equipment and materials, sufficient and representative samples of subjects, adequate statistical methods, etc.). Fundamental non-reproducibility may indicate errors in the methodology and conduct of the research, or even deliberate falsification. But professionally conducted research can also produce different results in different samples, at different times, and using different methods.
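
    As a rough illustration of the meta-analysis idea, here is a minimal sketch of fixed-effect (inverse-variance) pooling in Python. The study effects and standard errors are made-up numbers, not data from any real meta-analysis:

    ```python
    import math

    # Hypothetical per-study results: (effect estimate, standard error).
    # Illustrative numbers only, not taken from any real meta-analysis.
    studies = [
        (0.30, 0.15),   # study 1
        (0.10, 0.20),   # study 2
        (-0.05, 0.25),  # study 3: points the other way
    ]

    # Fixed-effect pooling: weight each study by the inverse of its variance,
    # so more precise studies count for more.
    weights = [1 / se ** 2 for _, se in studies]
    pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))

    # 95% confidence interval under a normal approximation.
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    print(f"pooled effect = {pooled:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
    ```

    If the individual studies scatter much more widely than their standard errors allow, that itself is a signal of heterogeneity worth explaining.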

    For example, the American news and analysis website Vox recently published an infographic showing that “everything we eat both causes and prevents cancer” – in the sense that for almost any product you can find studies showing a statistically significant effect on the probability of developing malignant tumors in both directions.

    The reasons for this are intuitive: the development of cancer is a long and complex process, influenced by many factors, and it is probabilistic, not deterministic (i.e., nothing protects against cancer 100%). In addition, diagnostic methods are still imperfect. So these studies suffer from the inability to take into account absolutely all the differences between subjects, as well as from “white noise”: random deviations that make it difficult to detect the true effect. It is pointless to compare their conclusions with, say, Newton’s laws, which describe much simpler objects and patterns. However, even from such “noisy” studies you can form a general picture: with the naked eye, you can see that there are many more articles proving the positive role of wine or tomatoes in preventing cancer than defending the opposite conclusion. For butter or beef, the situation is reversed.
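
    To see how a genuinely small effect produces a contradictory-looking literature, here is a toy Monte Carlo sketch; the true effect size and noise level are assumptions chosen purely for illustration:

    ```python
    import random

    random.seed(1)

    # Toy model: every "study" measures the same small true effect,
    # but with substantial sampling noise. Some estimates land positive,
    # some negative, so the literature looks self-contradictory.
    TRUE_EFFECT = 0.1   # assumed small true effect
    NOISE_SD = 0.3      # assumed per-study sampling noise

    estimates = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(20)]
    positive = sum(e > 0 for e in estimates)
    print(f"{positive} of 20 studies found a positive effect, "
          f"{20 - positive} found a negative one")
    ```

    Counting which side most studies fall on, as the Vox infographic implicitly invites you to do, is a crude but not useless way to read such a literature.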

    In addition, there is such a problem as “researcher degrees of freedom.” When statistically processing results, especially non-experimental ones (for example, data from international statistics), a scientist constantly has to make decisions: which external factors should be taken into account (“controlled for”), and whether the sample contains so-called “outliers,” i.e. observations obtained in violation of the methodology or simply having a disproportionately strong influence on the overall pattern. A group of researchers led by the American psychologist Brian Nosek ran the following experiment: 29 teams of scientists were given data on red cards in football matches, along with the characteristics of the players, and were asked to determine whether football players with a skin color other than white are sent off more often than white players. The result is shown in the image below.
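
    As a toy illustration of why such analytic choices matter, here is a sketch in which the same made-up data give different answers depending on whether you “control for” player position. All counts are invented for the example and have nothing to do with the actual dataset Nosek’s group distributed:

    ```python
    # counts[(group, position)] = (red_cards, matches) -- invented numbers
    counts = {
        ("nonwhite", "defender"): (40, 1000),
        ("nonwhite", "forward"):  (5,  500),
        ("white",    "defender"): (15, 500),
        ("white",    "forward"):  (10, 1000),
    }

    def rate(group, position=None):
        """Red cards per match for a group, optionally within one position."""
        cells = [(r, m) for (g, p), (r, m) in counts.items()
                 if g == group and (position is None or p == position)]
        return sum(r for r, _ in cells) / sum(m for _, m in cells)

    # Choice 1: ignore position entirely.
    print("pooled risk ratio:", rate("nonwhite") / rate("white"))

    # Choice 2: compare within each position.
    for pos in ("defender", "forward"):
        print(pos, "risk ratio:", rate("nonwhite", pos) / rate("white", pos))
    ```

    Both choices are defensible, yet the pooled comparison shows a 1.8x difference while the within-position comparisons show 1.33x and 1.0x. Multiply such forks across dozens of decisions and you get 29 teams with 29 answers.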

    When evaluating the results of a statistical analysis, two parameters should be taken into account: the effect size and its statistical significance, which depends both on the effect size and on the standard error of its measurement. If the 90% or 95% confidence interval of the measured effect includes 0, the effect is considered statistically insignificant. In the graph above, the gray circles represent statistically insignificant effects, while the green circles represent significant ones. It can be seen that most groups found an effect of one size or another, usually up to a 1.5-fold increase in the probability of non-white players being sent off. However, a significant minority (9 out of 29) did not find a statistically significant effect (and one group even found an almost three-fold increase in probability, which was nevertheless not statistically significant).
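
    The significance check described here is easy to make concrete. Below is a minimal sketch with a hypothetical log odds ratio and standard error as inputs; for ratio measures you work on the log scale, where the null value is 0 (equivalent to the ratio’s interval including 1):

    ```python
    import math

    def check_significance(effect, se, z=1.96):
        """95% CI for an effect and whether it excludes 0.
        For ratio measures, pass the log of the ratio and its SE."""
        lo, hi = effect - z * se, effect + z * se
        return (lo, hi), not (lo <= 0 <= hi)

    # Hypothetical study: log odds ratio 0.4 (odds ratio ~1.5), SE 0.25.
    (lo, hi), significant = check_significance(0.4, 0.25)
    print(f"log-OR CI: [{lo:.2f}, {hi:.2f}], "
          f"OR CI: [{math.exp(lo):.2f}, {math.exp(hi):.2f}], "
          f"significant: {significant}")
    ```

    With these made-up numbers the interval is roughly [-0.09, 0.89] on the log scale, so the apparent 1.5x effect is not statistically significant – exactly the situation of several teams in the red-card exercise.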

    So it's best to just accept that real science is not a school problem book where the correct answer is always known, and to look not at one study, but at the whole body of them.
