How do we know the LHC results are robust?


There is a Nature article on reproducibility in science.



According to that article, a (surprisingly) large number of experiments aren't reproducible, or at least there have been failed attempts to reproduce them. One of the figures reports that 70% of scientists in physics & engineering have failed to reproduce someone else's results, and 50% have failed to reproduce their own.



Clearly, if something cannot be reproduced, its veracity is called into question. Just as clearly, because there is only one particle accelerator with the power of the LHC in the world, we aren't able to independently reproduce LHC results; naively, one might even expect a 50% chance that if we built another LHC, it would not reach the same results. How, then, do we know that the LHC results (such as the discovery of the Higgs boson) are robust? Or do we not know they are robust, and are we effectively proceeding on faith that they are?

particle-physics large-hadron-collider data-analysis


  • I think it is worth pointing out that the LHC doesn't just do one particle collision and then say the experiment is completed. How much do you know about what goes into such experiments, how many times they are actually repeated, and then how the data is analyzed from there? – Aaron Stevens

  • @AaronStevens I know some of it, but am not an expert. I know the LHC crashes two protons into each other multiple times, and the results of each collision are expected to differ, with different outcomes having different probabilities. Many of the daughter particles are unstable and also expected to decay. The detector sees the "final" products when they reach the detector, and the analysis is supposed to infer from these detected particles what the original particles were. Does that answer your question? – Allure

  • I was asking if you had looked into the efforts taken to make sure the results from the LHC are good results and not just mistakes. Also, the LHC isn't the only particle collider in existence. – Aaron Stevens

  • Related video – Aaron Stevens

  • For more about this important question than you may have bargained for, look for discussions about the "look-elsewhere effect" in the statistical analysis of the data from the mostly-independent ATLAS and CMS experiments at the LHC, especially in the context of their joint discovery of the Higgs particle. – rob

3 Answers

That's a really great question. The 'replication crisis' is the finding that many effects in the social sciences couldn't be reproduced. There are many factors leading to this phenomenon, including

  • Weak standards of evidence, e.g., only $2\sigma$ evidence required to demonstrate an effect.

  • Researchers (subconsciously or otherwise) conducting bad scientific practice by selectively reporting and publishing significant results, e.g., considering many different effects until they find a significant one, or collecting data until they find a significant effect.

  • Poor training in statistical methods.

I'm not entirely sure about the exact efforts that the LHC experiments are making to ensure that they don't suffer the same problems. But let me say some things that should at least put your mind at ease:

  • Particle physics typically requires a high standard of evidence for discoveries ($5\sigma$).

  • The results at the LHC are already replicated!

    • There are several detectors placed around the LHC ring. Two of them, called ATLAS and CMS, are general-purpose detectors for Standard Model and Beyond the Standard Model physics. Both of them found compelling evidence for the Higgs boson. They are in principle completely independent (though in practice staff switch experiments, experimentalists from each experiment presumably talk and socialize with each other, etc., so there is possibly a very small dependence in analysis choices).

    • The Tevatron, a similar collider experiment in the USA operating at lower energies, found direct evidence for the Higgs boson.

    • The Higgs boson was observed in several datasets collected at the LHC.

  • The LHC (typically) publishes findings regardless of their statistical significance, i.e., significant results are not selectively reported.

  • The LHC teams are guided by statistical committees, hopefully ensuring good practice.

  • The LHC is in principle committed to open data, which means a lot of the data should at some point become public. This is one of the recommendations for addressing the crisis in the social sciences.

  • Typical training for experimentalists at the LHC includes basic statistics (although in my experience LHC experimentalists are still subject to the same traps and misinterpretations as everyone else).

  • All members (thousands) of the experimental teams are authors on the papers. The incentive for bad practices such as $p$-hacking is presumably somewhat lowered, as you cannot 'discover' a new effect, publish it under your own name alone, and thereby improve your job or grant prospects. This incentive might be a factor in the replication crisis in the social sciences.

  • All papers are subject to internal review (which I understand to be quite rigorous) as well as external review by a journal.

  • LHC analyses are often (I'm not sure who plans or decides this) blinded. This means that the experimentalists cannot tweak the analyses depending on the result: they are 'blind' to the result, make their choices, and unblind only at the end. This should help prevent $p$-hacking.

  • LHC analyses typically (though not always) report a global $p$-value, which has been corrected for multiple comparisons (the look-elsewhere effect); see the numerical sketch just below this list.
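To make the $5\sigma$ threshold and the look-elsewhere correction concrete, here is a minimal numerical sketch; the number of independent search regions below is invented purely for illustration, and real analyses estimate the trials factor far more carefully (e.g. with pseudo-experiments).

    # Convert significances to one-sided p-values and apply a naive
    # look-elsewhere (trials-factor) correction for N independent search regions.
    from scipy.stats import norm

    # One-sided p-value corresponding to a 5-sigma excess (~2.9e-7).
    print(f"local p-value at 5 sigma: {norm.sf(5.0):.2e}")

    # A 3-sigma *local* excess found somewhere in a scan over many mass bins:
    p_local = norm.sf(3.0)
    n_regions = 100  # hypothetical number of places the bump could have appeared
    p_global = 1.0 - (1.0 - p_local) ** n_regions
    print(f"local p = {p_local:.2e} -> global p = {p_global:.2e} "
          f"(about {norm.isf(p_global):.1f} sigma after the correction)")

A bump that looks like a $3\sigma$ effect locally is barely more than a $1\sigma$ fluctuation once you account for the fact that it could have appeared in any of a hundred places; that is why quoting global $p$-values matters.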

If anything, there is a suspicion that the practices at the LHC might even result in the opposite of the 'replication crisis': analyses that find effects that are somewhat significant might be examined and tweaked until they decrease.

– innisfree

    In addition to innisfree's excellent list, there's another fundamental difference between modern physics experiments and human-based experiments: While the latter tend to be exploratory, physics experiments these days are primarily confirmatory.



    In particular, we have theories (sometimes competing theories) that model our idea of how physics works. These theories make specific predictions about the kinds of results we ought to see, and physics experiments are generally then built to discriminate between the various predictions, which are typically either of the form "this effect happens or doesn't" (jet quenching, dispersion in the speed of light due to quantized space), or "this variable has some value" (the mass of the Higgs boson). We use computer simulations to produce pictures of what the results would look like in the different cases and then match the experimental data with those models; nearly always, what we get matches one or the other of the suspected cases. In this way, experimental results in physics are rarely shocking.
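As a toy illustration of the "match the data to model templates" step (this is not how ATLAS or CMS actually implement it, and the bin contents and signal shape below are invented), one can compare a background-only template with a signal-plus-background template using a binned Poisson likelihood:

    # Toy binned-likelihood comparison of two model templates against "data".
    import numpy as np
    from scipy.stats import poisson

    rng = np.random.default_rng(0)

    background = np.array([100., 80., 60., 40., 20.])  # expected counts per bin
    signal = np.array([0., 5., 25., 5., 0.])            # hypothetical resonance shape

    # Pretend the observed data were drawn from the signal+background hypothesis.
    data = rng.poisson(background + signal)

    def binned_loglike(observed, expected):
        """Sum of Poisson log-probabilities over all bins."""
        return poisson.logpmf(observed, expected).sum()

    ll_b = binned_loglike(data, background)
    ll_sb = binned_loglike(data, background + signal)

    # Likelihood-ratio statistic; large values favour the signal hypothesis.
    q = -2.0 * (ll_b - ll_sb)
    print(f"log L(b) = {ll_b:.1f}, log L(s+b) = {ll_sb:.1f}, q = {q:.1f}")

Whichever template describes the data better wins; real analyses add nuisance parameters for systematic uncertainties and calibrate the test statistic with pseudo-experiments or asymptotic formulae.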



Occasionally, however, what we see is something really unexpected, such as the time OPERA seemed to have observed faster-than-light motion, or, for that matter, Rutherford's gold-foil experiment. In these cases, priority goes toward reproducing the effect if possible and explaining what's going on. That usually turns out to be an error of some sort, such as the miswired cable in OPERA, but it does sometimes reveal something totally new, which then becomes the subject of intense research itself until the new effect is understood well enough to start making models of it again.

– chrylis

The paper seems to be a statistical analysis of opinions, and it is in no way rigorous enough to raise a question about the LHC. It is statistics about undisclosed statistics.



Here is a simpler example of statistics of failures: take an Olympic athlete. How many failures were there before the record was broken? Is the record not broken because there may have been a thousand failures before breaking it?



      What about the hundreds of athletes who try to reproduce and get a better record? Should they not try?



The statistics of failed experiments is similar: there is a goal (actually thousands of goals, depending on the physics discipline) and a number of trials to reach the goal, though the Olympic-record analogy should not be taken too far; it only points out the difficulty of combining statistics from a large number of sets. In physics there may be wrong assumptions, blind alleys, logical errors... that contribute to the failure of reproducibility. The confidence level from statistical and systematic errors is used to define the robustness of a measurement.



We know the LHC results are robust because there are two major and many smaller experiments trying for the same goals. The reason there are two experiments is so that systematic errors in one will not give spurious results. We trust that the measurement statistics that give the end results are correct, just as we trust for the record-breaking run that the measured times and distances are correct.
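As a back-of-the-envelope sketch of that point (the numbers below are invented, not real ATLAS or CMS values): each experiment quotes a statistical and a systematic uncertainty, and two consistent, independent measurements combine into a tighter result than either gives alone.

    # Combine two independent measurements of the same quantity.
    # Values and uncertainties are invented for illustration only.
    import math

    def total_err(stat, syst):
        # Naive quadrature sum of statistical and systematic uncertainties.
        return math.hypot(stat, syst)

    exp_a = (125.3, 0.4, 0.3)  # (value, statistical error, systematic error)
    exp_b = (124.9, 0.5, 0.2)

    measurements = [(v, total_err(s, y)) for v, s, y in (exp_a, exp_b)]
    for value, err in measurements:
        print(f"measurement: {value} +/- {err:.2f}")

    # Inverse-variance weighted average, valid for independent measurements.
    weights = [1.0 / err ** 2 for _, err in measurements]
    combined = sum(w * v for (v, _), w in zip(measurements, weights)) / sum(weights)
    combined_err = 1.0 / math.sqrt(sum(weights))
    print(f"combined:    {combined:.2f} +/- {combined_err:.2f}")

If the two detectors disagreed by much more than their combined uncertainty, that would point to an unaccounted-for systematic error in at least one of them; it is their agreement that makes the result robust.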



(And the LHC is not an experiment. It is where experiments can be carried out, depending on the efforts and ingenuity of researchers; it is the field where the Olympics takes place.)



The robustness of scientific results depends on the specific experimental measurements, not on integrating over all disparate experiments ever made; that is a bad use of statistics. Statistics of statistics, i.e. the confidence level to assign to the "failed experiments", has to be computed rigorously, and the paper does not do that.



Another way to look at it: if there were no failures, would the experiments mean anything? They would be predictable by pen and paper.

– anna v

  • I'm not sure I buy the Olympics analogy. Failed attempts at breaking a record aren't the same thing as a failed attempt to reproduce an experiment. It also sounds like you are saying we should just cherry-pick what does work and ignore when it fails. – Aaron Stevens

  • @AaronStevens It is the same as failed attempts by another runner to reproduce the record. I will edit, thanks. – anna v

  • @AaronStevens "cherry pick what does work": but is that not evolution in general? And "ignore when it fails": one learns from failure to design better experiments. – anna v

  • And should you always expect no errors, both in instruments and in logic, from experimenters? Science started with trial and error and evolved. Failures fall by the side if not reproducible. – anna v

  • You're just making it sound like if we just try hard enough we can reproduce anything (break the record), but this is not always the case. Not being able to reproduce experiment A could mean that something was wrong with experiment A and that we shouldn't make too many predictions (or any at all) based on experiment A. We wouldn't want to keep chasing it (trying to break the record). – Aaron Stevens
      Your Answer





      StackExchange.ifUsing("editor", function ()
      return StackExchange.using("mathjaxEditing", function ()
      StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix)
      StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
      );
      );
      , "mathjax-editing");

      StackExchange.ready(function()
      var channelOptions =
      tags: "".split(" "),
      id: "151"
      ;
      initTagRenderer("".split(" "), "".split(" "), channelOptions);

      StackExchange.using("externalEditor", function()
      // Have to fire editor after snippets, if snippets enabled
      if (StackExchange.settings.snippets.snippetsEnabled)
      StackExchange.using("snippets", function()
      createEditor();
      );

      else
      createEditor();

      );

      function createEditor()
      StackExchange.prepareEditor(
      heartbeatType: 'answer',
      autoActivateHeartbeat: false,
      convertImagesToLinks: false,
      noModals: true,
      showLowRepImageUploadWarning: true,
      reputationToPostImages: null,
      bindNavPrevention: true,
      postfix: "",
      imageUploader:
      brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
      contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
      allowUrls: true
      ,
      noCode: true, onDemand: true,
      discardSelector: ".discard-answer"
      ,immediatelyShowMarkdownHelp:true
      );



      );













      draft saved

      draft discarded


















      StackExchange.ready(
      function ()
      StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fphysics.stackexchange.com%2fquestions%2f468891%2fhow-do-we-know-the-lhc-results-are-robust%23new-answer', 'question_page');

      );

      Post as a guest















      Required, but never shown

























      3 Answers
      3






      active

      oldest

      votes








      3 Answers
      3






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes









      5












      $begingroup$

      That's a really great question. The 'replication crisis' is that many effects in social sciences couldn't be reproduced. There are many factors leading to this phenomenon, including



      • Weak standards of evidence, e.g., $2sigma$ evidence required to demonstrate an effect

      • Researchers (subconsciously or otherwise) conducting bad scientific practice by selectively reporting and publishing significant results. E.g. considering many different effects until they find a significant effect or collecting data until they find a significant effect.

      • Poor training in statistical methods.

      I'm not entirely sure about the exact efforts that the LHC experiments are making to ensure that they don't suffer the same problems. But let me say some things that should at least put your mind at ease:



      • Particle physics typically requires a high-standard of evidence for discoveries ($5sigma$)

      • The results at the LHC are already replicated!

        • There are several detectors placed around the LHC ring. Two them, called ATLAS and CMS, are general purpose detectors for Standard Model and Beyond the Standard Model physics. Both of them found compelling evidence for the Higgs boson. They are in principle completely independent (though in practice staff switch experiments, experimentalists from each experiment presumably talk and socialize with each other etc, so possibly a very small dependence in analysis choices etc).

        • The Tevatron, a similar collider experiment in the USA operating at lower-energies, found direct evidence for the Higgs boson.

        • The Higgs boson was observed several datasets collected at the LHC


      • The LHC (typically) publishes findings regardless of their statistical significance, i.e., significant results are not selectively reported.

      • The LHC teams are guided by statistical committees, hopefully ensuring good practice

      • The LHC is in principle committed to open data, which means a lot of the data should at some point become public. This is one recommendation for helping the crisis in social sciences.

      • Typical training for experimentalists at the LHC includes basic statistics (although in my experience LHC experimentalits are still subject to the same traps and misinterpretations as everyone else).

      • All members (thousands) of the experimental teams are authors on the papers. The incentive for bad practices such as $p$-hacking is presumably slightly lowered, as you cannot 'discover' a new effect and publish it only under your own name, and have improved job/grant prospects. This incentive might be a factor in the replication crisis in social sciences.

      • All papers are subject to internal review (which I understand to be quite rigorous) as well as external review by a journal

      • LHC analyses are often (I'm not sure who plans or decides this) blinded. This means that the experimentalists cannot tweak the analyses depending on the result. They are 'blind' to the result, make their choices, then unblind it only at the end. This should help prevent $p$-hacking

      • LHC analysis typically (though not always) report a global $p$-value, which has beeen corrected for multiple comparisons (the look-elsewhere effect).

      If anything, there is a suspicion that the practices at the LHC might even result in the opposite of the 'replication crisis;' analyses that find effects that are somewhat significant might be examined and tweaked until they decrease.






      share|cite|improve this answer











      $endgroup$

















        5












        $begingroup$

        That's a really great question. The 'replication crisis' is that many effects in social sciences couldn't be reproduced. There are many factors leading to this phenomenon, including



        • Weak standards of evidence, e.g., $2sigma$ evidence required to demonstrate an effect

        • Researchers (subconsciously or otherwise) conducting bad scientific practice by selectively reporting and publishing significant results. E.g. considering many different effects until they find a significant effect or collecting data until they find a significant effect.

        • Poor training in statistical methods.

        I'm not entirely sure about the exact efforts that the LHC experiments are making to ensure that they don't suffer the same problems. But let me say some things that should at least put your mind at ease:



        • Particle physics typically requires a high-standard of evidence for discoveries ($5sigma$)

        • The results at the LHC are already replicated!

          • There are several detectors placed around the LHC ring. Two them, called ATLAS and CMS, are general purpose detectors for Standard Model and Beyond the Standard Model physics. Both of them found compelling evidence for the Higgs boson. They are in principle completely independent (though in practice staff switch experiments, experimentalists from each experiment presumably talk and socialize with each other etc, so possibly a very small dependence in analysis choices etc).

          • The Tevatron, a similar collider experiment in the USA operating at lower-energies, found direct evidence for the Higgs boson.

          • The Higgs boson was observed several datasets collected at the LHC


        • The LHC (typically) publishes findings regardless of their statistical significance, i.e., significant results are not selectively reported.

        • The LHC teams are guided by statistical committees, hopefully ensuring good practice

        • The LHC is in principle committed to open data, which means a lot of the data should at some point become public. This is one recommendation for helping the crisis in social sciences.

        • Typical training for experimentalists at the LHC includes basic statistics (although in my experience LHC experimentalits are still subject to the same traps and misinterpretations as everyone else).

        • All members (thousands) of the experimental teams are authors on the papers. The incentive for bad practices such as $p$-hacking is presumably slightly lowered, as you cannot 'discover' a new effect and publish it only under your own name, and have improved job/grant prospects. This incentive might be a factor in the replication crisis in social sciences.

        • All papers are subject to internal review (which I understand to be quite rigorous) as well as external review by a journal

        • LHC analyses are often (I'm not sure who plans or decides this) blinded. This means that the experimentalists cannot tweak the analyses depending on the result. They are 'blind' to the result, make their choices, then unblind it only at the end. This should help prevent $p$-hacking

        • LHC analysis typically (though not always) report a global $p$-value, which has beeen corrected for multiple comparisons (the look-elsewhere effect).

        If anything, there is a suspicion that the practices at the LHC might even result in the opposite of the 'replication crisis;' analyses that find effects that are somewhat significant might be examined and tweaked until they decrease.






        share|cite|improve this answer











        $endgroup$















          5












          5








          5





          $begingroup$

          That's a really great question. The 'replication crisis' is that many effects in social sciences couldn't be reproduced. There are many factors leading to this phenomenon, including



          • Weak standards of evidence, e.g., $2sigma$ evidence required to demonstrate an effect

          • Researchers (subconsciously or otherwise) conducting bad scientific practice by selectively reporting and publishing significant results. E.g. considering many different effects until they find a significant effect or collecting data until they find a significant effect.

          • Poor training in statistical methods.

          I'm not entirely sure about the exact efforts that the LHC experiments are making to ensure that they don't suffer the same problems. But let me say some things that should at least put your mind at ease:



          • Particle physics typically requires a high-standard of evidence for discoveries ($5sigma$)

          • The results at the LHC are already replicated!

            • There are several detectors placed around the LHC ring. Two them, called ATLAS and CMS, are general purpose detectors for Standard Model and Beyond the Standard Model physics. Both of them found compelling evidence for the Higgs boson. They are in principle completely independent (though in practice staff switch experiments, experimentalists from each experiment presumably talk and socialize with each other etc, so possibly a very small dependence in analysis choices etc).

            • The Tevatron, a similar collider experiment in the USA operating at lower-energies, found direct evidence for the Higgs boson.

            • The Higgs boson was observed several datasets collected at the LHC


          • The LHC (typically) publishes findings regardless of their statistical significance, i.e., significant results are not selectively reported.

          • The LHC teams are guided by statistical committees, hopefully ensuring good practice

          • The LHC is in principle committed to open data, which means a lot of the data should at some point become public. This is one recommendation for helping the crisis in social sciences.

          • Typical training for experimentalists at the LHC includes basic statistics (although in my experience LHC experimentalits are still subject to the same traps and misinterpretations as everyone else).

          • All members (thousands) of the experimental teams are authors on the papers. The incentive for bad practices such as $p$-hacking is presumably slightly lowered, as you cannot 'discover' a new effect and publish it only under your own name, and have improved job/grant prospects. This incentive might be a factor in the replication crisis in social sciences.

          • All papers are subject to internal review (which I understand to be quite rigorous) as well as external review by a journal

          • LHC analyses are often (I'm not sure who plans or decides this) blinded. This means that the experimentalists cannot tweak the analyses depending on the result. They are 'blind' to the result, make their choices, then unblind it only at the end. This should help prevent $p$-hacking

          • LHC analysis typically (though not always) report a global $p$-value, which has beeen corrected for multiple comparisons (the look-elsewhere effect).

          If anything, there is a suspicion that the practices at the LHC might even result in the opposite of the 'replication crisis;' analyses that find effects that are somewhat significant might be examined and tweaked until they decrease.






          share|cite|improve this answer











          $endgroup$



          That's a really great question. The 'replication crisis' is that many effects in social sciences couldn't be reproduced. There are many factors leading to this phenomenon, including



          • Weak standards of evidence, e.g., $2sigma$ evidence required to demonstrate an effect

          • Researchers (subconsciously or otherwise) conducting bad scientific practice by selectively reporting and publishing significant results. E.g. considering many different effects until they find a significant effect or collecting data until they find a significant effect.

          • Poor training in statistical methods.

          I'm not entirely sure about the exact efforts that the LHC experiments are making to ensure that they don't suffer the same problems. But let me say some things that should at least put your mind at ease:



          • Particle physics typically requires a high-standard of evidence for discoveries ($5sigma$)

          • The results at the LHC are already replicated!

            • There are several detectors placed around the LHC ring. Two them, called ATLAS and CMS, are general purpose detectors for Standard Model and Beyond the Standard Model physics. Both of them found compelling evidence for the Higgs boson. They are in principle completely independent (though in practice staff switch experiments, experimentalists from each experiment presumably talk and socialize with each other etc, so possibly a very small dependence in analysis choices etc).

            • The Tevatron, a similar collider experiment in the USA operating at lower-energies, found direct evidence for the Higgs boson.

            • The Higgs boson was observed several datasets collected at the LHC


          • The LHC (typically) publishes findings regardless of their statistical significance, i.e., significant results are not selectively reported.

          • The LHC teams are guided by statistical committees, hopefully ensuring good practice

          • The LHC is in principle committed to open data, which means a lot of the data should at some point become public. This is one recommendation for helping the crisis in social sciences.

          • Typical training for experimentalists at the LHC includes basic statistics (although in my experience LHC experimentalits are still subject to the same traps and misinterpretations as everyone else).

          • All members (thousands) of the experimental teams are authors on the papers. The incentive for bad practices such as $p$-hacking is presumably slightly lowered, as you cannot 'discover' a new effect and publish it only under your own name, and have improved job/grant prospects. This incentive might be a factor in the replication crisis in social sciences.

          • All papers are subject to internal review (which I understand to be quite rigorous) as well as external review by a journal

          • LHC analyses are often (I'm not sure who plans or decides this) blinded. This means that the experimentalists cannot tweak the analyses depending on the result. They are 'blind' to the result, make their choices, then unblind it only at the end. This should help prevent $p$-hacking

          • LHC analysis typically (though not always) report a global $p$-value, which has beeen corrected for multiple comparisons (the look-elsewhere effect).

          If anything, there is a suspicion that the practices at the LHC might even result in the opposite of the 'replication crisis;' analyses that find effects that are somewhat significant might be examined and tweaked until they decrease.







          share|cite|improve this answer














          share|cite|improve this answer



          share|cite|improve this answer








          edited 39 mins ago

























          answered 51 mins ago









          innisfreeinnisfree

          11.5k32961




          11.5k32961





















              0












              $begingroup$

              In addition to innisfree's excellent list, there's another fundamental difference between modern physics experiments and human-based experiments: While the latter tend to be exploratory, physics experiments these days are primarily confirmatory.



              In particular, we have theories (sometimes competing theories) that model our idea of how physics works. These theories make specific predictions about the kinds of results we ought to see, and physics experiments are generally then built to discriminate between the various predictions, which are typically either of the form "this effect happens or doesn't" (jet quenching, dispersion in the speed of light due to quantized space), or "this variable has some value" (the mass of the Higgs boson). We use computer simulations to produce pictures of what the results would look like in the different cases and then match the experimental data with those models; nearly always, what we get matches one or the other of the suspected cases. In this way, experimental results in physics are rarely shocking.



              Occasionally, however, what we see is something really unexpected, such as the time OPERA seemed to have observed faster-than-light motion—or, for that matter, Rutherford's gold-foil experiment. In these cases, priority tends to go toward reproducing the effect if possible and explaining what's going on (which usually tends to be an error of some sort, such as the miswired cable in OPERA, but does sometimes reveal something totally new, which then tends to become the subject of intense research itself until the new effect is understood well enough to start making models of it again).






              share|cite|improve this answer








              New contributor




              chrylis is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
              Check out our Code of Conduct.






              $endgroup$

















                0












                $begingroup$

                In addition to innisfree's excellent list, there's another fundamental difference between modern physics experiments and human-based experiments: While the latter tend to be exploratory, physics experiments these days are primarily confirmatory.



                In particular, we have theories (sometimes competing theories) that model our idea of how physics works. These theories make specific predictions about the kinds of results we ought to see, and physics experiments are generally then built to discriminate between the various predictions, which are typically either of the form "this effect happens or doesn't" (jet quenching, dispersion in the speed of light due to quantized space), or "this variable has some value" (the mass of the Higgs boson). We use computer simulations to produce pictures of what the results would look like in the different cases and then match the experimental data with those models; nearly always, what we get matches one or the other of the suspected cases. In this way, experimental results in physics are rarely shocking.



                Occasionally, however, what we see is something really unexpected, such as the time OPERA seemed to have observed faster-than-light motion—or, for that matter, Rutherford's gold-foil experiment. In these cases, priority tends to go toward reproducing the effect if possible and explaining what's going on (which usually tends to be an error of some sort, such as the miswired cable in OPERA, but does sometimes reveal something totally new, which then tends to become the subject of intense research itself until the new effect is understood well enough to start making models of it again).






                share|cite|improve this answer








                New contributor




                chrylis is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                Check out our Code of Conduct.






                $endgroup$















                  0












                  0








                  0





                  $begingroup$

                  In addition to innisfree's excellent list, there's another fundamental difference between modern physics experiments and human-based experiments: While the latter tend to be exploratory, physics experiments these days are primarily confirmatory.



                  In particular, we have theories (sometimes competing theories) that model our idea of how physics works. These theories make specific predictions about the kinds of results we ought to see, and physics experiments are generally then built to discriminate between the various predictions, which are typically either of the form "this effect happens or doesn't" (jet quenching, dispersion in the speed of light due to quantized space), or "this variable has some value" (the mass of the Higgs boson). We use computer simulations to produce pictures of what the results would look like in the different cases and then match the experimental data with those models; nearly always, what we get matches one or the other of the suspected cases. In this way, experimental results in physics are rarely shocking.



                  Occasionally, however, what we see is something really unexpected, such as the time OPERA seemed to have observed faster-than-light motion—or, for that matter, Rutherford's gold-foil experiment. In these cases, priority tends to go toward reproducing the effect if possible and explaining what's going on (which usually tends to be an error of some sort, such as the miswired cable in OPERA, but does sometimes reveal something totally new, which then tends to become the subject of intense research itself until the new effect is understood well enough to start making models of it again).






                  share|cite|improve this answer








                  New contributor




                  chrylis is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                  Check out our Code of Conduct.






                  $endgroup$



                  In addition to innisfree's excellent list, there's another fundamental difference between modern physics experiments and human-based experiments: While the latter tend to be exploratory, physics experiments these days are primarily confirmatory.



                  In particular, we have theories (sometimes competing theories) that model our idea of how physics works. These theories make specific predictions about the kinds of results we ought to see, and physics experiments are generally then built to discriminate between the various predictions, which are typically either of the form "this effect happens or doesn't" (jet quenching, dispersion in the speed of light due to quantized space), or "this variable has some value" (the mass of the Higgs boson). We use computer simulations to produce pictures of what the results would look like in the different cases and then match the experimental data with those models; nearly always, what we get matches one or the other of the suspected cases. In this way, experimental results in physics are rarely shocking.



                  Occasionally, however, what we see is something really unexpected, such as the time OPERA seemed to have observed faster-than-light motion—or, for that matter, Rutherford's gold-foil experiment. In these cases, priority tends to go toward reproducing the effect if possible and explaining what's going on (which usually tends to be an error of some sort, such as the miswired cable in OPERA, but does sometimes reveal something totally new, which then tends to become the subject of intense research itself until the new effect is understood well enough to start making models of it again).







                  share|cite|improve this answer








                  New contributor




                  chrylis is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                  Check out our Code of Conduct.









                  share|cite|improve this answer



                  share|cite|improve this answer






                  New contributor




                  chrylis is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                  Check out our Code of Conduct.









                  answered 20 mins ago









                  chrylischrylis

                  1012




                  1012




                  New contributor




                  chrylis is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                  Check out our Code of Conduct.





                  New contributor





                  chrylis is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                  Check out our Code of Conduct.






                  chrylis is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
                  Check out our Code of Conduct.





















                      0












                      $begingroup$

                      The paper seems to be a statistical analysis of opinions, and in no way is rigorous enough to raise a question about the LHC. It is statistics about undisclosed statistics.



                      Here is a simpler example for statistics of failures: Take an Olympics athlete. How many failures before breaking the record? Is the record not broken because there may have been a thousand failures before breaking it?



                      What about the hundreds of athletes who try to reproduce and get a better record? Should they not try?



                      The statistics of failed experiments is similar: There is a goal (actually thousands of goals depending on the physics discipline), and a number of trials to reach the goal, though the olympics record analogy should not be taken too far, only to point out the difficulty of combining statistics from a large number of sets. In physics there may be wrong assumptions, blind alleys, logical errors... that may contribute to the failure of reproducibility. The confidence level from statistical and systematic errors are used to define the robustness of a measurement.



                      We know the LHC results are robust because there are two major and many smaller experiments trying for the same goals. The reason there are two experiments is so that systematic errors in one will not give spurious results. We trust that the measurement statistics that give the end results are correct, as we trust for the record breaking run that the measured times and distances are correct.



                      (And LHC is not an experiment. It is where experiments can be carried out depending on the efforts and ingenuity of researchers, it is the field where the Olympics takes place.)



                      The robustness of scientific results depends on the specific experimental measurements, not on integrating over all disparate experiments ever made. Bad use of statistics. For statistics of statistics, i.e. the confidence level of the "failed experiments" have to be done rigorously and the paper is not doing that.



                      Another way to look at it: If there were no failures , would the experiments mean anything? They would be predictable by pen and paper.






                      share|cite|improve this answer











                      $endgroup$








• I'm not sure I buy the Olympics analogy. A failed attempt at breaking a record isn't the same thing as a failed attempt to reproduce an experiment. It also sounds like you are saying we should just cherry-pick what does work and ignore what fails. – Aaron Stevens, 1 hour ago

• @AaronStevens It is the same as failed attempts by another runner to reproduce the record. I will edit, thanks. – anna v, 1 hour ago

• @AaronStevens "Cherry-pick what does work": but isn't that evolution in general? And "ignore when it fails": one learns from failure to design better experiments. – anna v, 1 hour ago

• And should you always expect no errors, in either instruments or logic, from experimenters? Science started with trial and error and evolved. Failures that cannot be reproduced fall by the wayside. – anna v, 1 hour ago

• You're just making it sound like if we just try hard enough we can reproduce anything (break the record), but this is not always the case. Not being able to reproduce experiment A could mean that something was wrong with experiment A and that we shouldn't make too many predictions (or any at all) based on it. We wouldn't want to keep chasing it (trying to break the record). – Aaron Stevens, 1 hour ago















