Bring4th Forums
    As of Friday, August 5th, 2022, the Bring4th forums on this page have been converted to a permanent read-only archive. If you would like to continue your journey with Bring4th, the new forums are now at https://discourse.bring4th.org.



    Thread: Scientific method: Statistical errors


    zenmaster (Offline)

    Member
    Posts: 5,541
    Threads: 132
    Joined: Jan 2009
    #1
    02-13-2014, 12:36 AM
    "P values, the 'gold standard' of statistical validity, are not as reliable as many scientists assume."

    http://www.nature.com/news/scientific-me...rs-1.14700

“P values are not doing their job, because they can't,” says Stephen Ziliak, an economist at Roosevelt University in Chicago, Illinois, and a frequent critic of the way statistics are used.
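The article's point can be illustrated with a short simulation (a sketch of my own, not from the article; the effect size, sample size, and number of replications are all illustrative assumptions): running the *same* experiment repeatedly, with the same true effect, yields p-values that swing from "highly significant" to nowhere near significant.

```python
import math
import random

def two_sample_p(x, y):
    """Two-sided p-value for a difference in means, using a normal
    (z-test) approximation -- adequate for moderately large samples."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    se = math.sqrt(vx / nx + vy / ny)
    z = (mx - my) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability

random.seed(1)
# Twenty replications of one experiment: a true effect of 0.3 SD,
# 50 subjects per group every time. Only the random noise differs.
pvals = []
for _ in range(20):
    treated = [random.gauss(0.3, 1.0) for _ in range(50)]
    control = [random.gauss(0.0, 1.0) for _ in range(50)]
    pvals.append(two_sample_p(treated, control))
print(min(pvals), max(pvals))  # the spread across replications is large
```

The spread between the smallest and largest p-value is typically enormous, which is exactly why a single p-value is a shaky foundation for a yes/no publication decision.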
3 members thanked zenmaster for this post: reeay, xise, Patrick
    xise (Offline)

    Member
    Posts: 1,909
    Threads: 52
    Joined: Mar 2012
    #2
    02-13-2014, 05:40 PM (This post was last modified: 02-13-2014, 05:42 PM by xise.)
At the bottom of that link there are numerous other articles that touch on issues of reliability and reproducibility.

    http://www.nature.com/news/policy-nih-pl...ty-1.14586

    "The recent evidence showing the irreproducibility of significant numbers of biomedical-research publications demands immediate and substantive action. The NIH is firmly committed to making systematic changes that should reduce the frequency and severity of this problem — but success will come only with the full engagement of the entire biomedical-research enterprise."

    Looks like they're starting to tackle some of these issues (article is from January 2014).
1 member thanked xise for this post: Patrick
    zenmaster (Offline)

    Member
    Posts: 5,541
    Threads: 132
    Joined: Jan 2009
    #3
    02-13-2014, 06:59 PM
That, and the insightful comments at the bottom, are a good read, especially regarding rewarded behavior. It's a tricky state of affairs to improve without a significant overhaul, and will the overhaul introduce a crippling bureaucracy?
1 member thanked zenmaster for this post: Patrick
    Guardian (Offline)

    Member
    Posts: 361
    Threads: 31
    Joined: Sep 2012
    #4
    02-14-2014, 02:46 AM
    This is excellent. There was a heart attack study called "Alpha Omega" that compared statins to Omega-3 oil.

13% of statin users had heart attacks, but only 9% of omega-3 users did. Yet because the P value was 0.051, the result was buried as "insignificant". Absolutely shocking.
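For concreteness, here is what a borderline p-value like that looks like in a pooled two-proportion z-test. The per-arm sample size below is hypothetical, chosen purely so the arithmetic lands near the reported 0.051; the actual trial's enrollment would differ:

```python
import math

def two_proportion_p(p1, p2, n1, n2):
    """Two-sided p-value for H0: equal event rates,
    using the pooled two-proportion z-test."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical: 465 patients per arm (NOT the trial's actual size),
# with the 13% vs 9% heart-attack rates quoted above.
p = two_proportion_p(0.13, 0.09, 465, 465)
print(round(p, 3))  # lands just above the 0.05 cutoff
```

A handful of patients either way would push this across the 0.05 line, which shows how arbitrary it is to treat 0.049 as a finding and 0.051 as nothing.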

    xise (Offline)

    Member
    Posts: 1,909
    Threads: 52
    Joined: Mar 2012
    #5
    02-14-2014, 03:58 PM (This post was last modified: 02-14-2014, 04:45 PM by xise.)
One of the incidents I referred to as a failure of science - the misapprehension of Vioxx's safety - may be linked to Vioxx researchers throwing out data under institutional, p-hacking-style guidelines:

    http://bigthink.com/neurobonkers/the-sta...of-science

    "We then hear of a real case study concerning Merck’s Vioxx painkiller marketed in over eighty countries with a peak value of over two and a half billion. After a patient died of a heart attack it emerged in court proceedings that Merck had allegedly omitted from their research findings published in the Annals of Internal Medicine that five of the patients who participated in the clinical trial of Vioxx suffered heart attacks while participating in the trial while only one participant had a heart attack while taking the generic alternative naproxen. Most worryingly of all, this was technically a correct action to take due to the fact that the Annals of Internal Medicine has strict rules regarding statistical significance of findings."

Man, these issues seem to run deep.

    Poet (Offline)

    Member
    Posts: 128
    Threads: 11
    Joined: Dec 2012
    #6
    02-14-2014, 04:57 PM
For me, the scientific method is totally useless from an epistemological viewpoint, at least in the social sciences. You cannot falsify a hypothesis about human behavior with historical data. Everything done with statistics in modern psychology, for instance, is just a giant work programme for scientists. They assume that a contingency of causes to effects exists (and likewise of non-causes to non-effects). This is a totally unrealistic assumption when it comes to human action.

    xise (Offline)

    Member
    Posts: 1,909
    Threads: 52
    Joined: Mar 2012
    #7
    05-07-2014, 02:46 AM (This post was last modified: 05-07-2014, 02:57 AM by xise.)
    Another long but good article that goes in depth as to how 'p-hacking' works and how it's a systemic problem in much of published research:

    http://www.psmag.com/navigation/health-a...ior-78858/

    Quote:The entire field of biomedical research, for instance, was shaken recently when researchers at the pharmaceutical firm Amgen reported that, in search of new drugs, they had selected 53 promising basic-research papers from leading medical journals and attempted to reproduce the original findings with the same experiments. They failed approximately nine times out of 10.

It's a shame that even peer-reviewed publication yields scientific findings that are flat-out incorrect because of statistical massaging. There seems to be a fundamental, systemic issue with the way modern science is practiced due to flawed statistics. Unfortunately, this problem doesn't yet seem to be widely known or acknowledged.
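The p-hacking mechanism that article describes can be sketched in a few lines (my own illustrative simulation; the 20 outcomes, 100 subjects, and 300 simulated studies are assumptions, not numbers from the article): if a study quietly tries 20 independent outcome measures that are all pure noise, the chance that at least one comes out "significant" at p < 0.05 is roughly 1 - 0.95^20, about 64%.

```python
import math
import random

def corr_p(x, y):
    """Two-sided p-value for H0: zero correlation, via the
    large-sample normal approximation z = r * sqrt(n)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    r = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)
    return math.erfc(abs(r) * math.sqrt(n) / math.sqrt(2))

random.seed(0)
n, k, studies = 100, 20, 300   # subjects, outcomes tried, simulated studies
false_hits = 0
for _ in range(studies):
    treat = [random.gauss(0, 1) for _ in range(n)]
    # Every outcome is pure noise: no real effect exists anywhere,
    # yet most studies still find at least one "significant" result.
    if any(corr_p(treat, [random.gauss(0, 1) for _ in range(n)]) < 0.05
           for _ in range(k)):
        false_hits += 1
rate = false_hits / studies
print(rate)
```

Report only the one outcome that "worked" and the published p-value looks perfectly legitimate, which is why this is so hard to detect after the fact.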
2 members thanked xise for this post: reeay, Parsons
reeay (Away)

    Account Closed
    Posts: 2,392
    Threads: 42
    Joined: Oct 2012
    #8
    05-07-2014, 03:19 AM
It is widely known and discussed, and statisticians make jokes about it (yeah, sad). The buffer for this type of epic ethical fail is replicating studies, which is kind of the norm these days. As the quoted blurb said, reproducing the original findings failed, so we know something isn't 'right'. Of course, if they're all p-hacking it would be quite a tragedy...
1 member thanked reeay for this post: xise