Talk:Randomized controlled trial/Archive 1


Recruitment?

I wonder if this needs some detail on ethical recruitment, certainly at the level of the Declaration of Helsinki, and perhaps examples from U.S. regulations such as 21CFR11. The role of an Institutional Review Board probably should be considered. Howard C. Berkowitz 00:38, 19 April 2009 (UTC)

I think some of this would help, though I do not know this area as well. OK to add in later editions? Robert Badgett 01:38, 29 April 2009 (UTC)

Publication bias

Publication bias refers to the tendency for trials that show a positive, significant effect to be more likely to be published than those that show no effect or are inconclusive. The Merck affair is not a case of publication bias, but it raises different and interesting issues. The core issue there relates to the predetermined schedule of the trial. For statistical rigor, it is important that the parameters of a trial, including its end date, be fixed in advance: the organisers cannot be left to choose when to end the trial, because of the natural temptation to stop at a point when the results appear consistent with the hoped-for outcome. The dispute in the Merck case arose because adverse events occurred after the prescheduled end date and so were not included. The authors argued that including them would invalidate the prospective statistical design; the objection was that the publication should have disclosed adverse events known to have occurred subsequently. The arguments, though, are complex.
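
As a concrete illustration of why a fixed end date matters, here is a minimal simulation sketch (my own, not from the discussion above; all parameters are illustrative). Two arms with no true effect are compared; if the analysts test after every batch of patients and stop as soon as the result looks significant, the false-positive rate climbs well above the nominal 5% of a single prescheduled analysis.

```python
# Sketch: optional stopping inflates the false-positive rate.
# Two arms, no true effect; the "peeking" trial tests after each
# batch and stops at the first |z| > 1.96. Illustrative parameters.
import math
import random

random.seed(0)

def z_stat(a, b):
    # Two-sample z statistic for the difference in means.
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

def trial(batches=10, batch_size=20, peek=True):
    # Returns True if the trial ends "significant" despite no real effect.
    treat, control = [], []
    for _ in range(batches):
        treat += [random.gauss(0, 1) for _ in range(batch_size)]
        control += [random.gauss(0, 1) for _ in range(batch_size)]
        if peek and abs(z_stat(treat, control)) > 1.96:
            return True  # stopped early at an interim "significant" look
    return abs(z_stat(treat, control)) > 1.96  # single prescheduled analysis

n = 2000
fixed = sum(trial(peek=False) for _ in range(n)) / n
peeked = sum(trial(peek=True) for _ in range(n)) / n
print(f"fixed end date:        {fixed:.1%} false positives (nominal 5%)")
print(f"stop when significant: {peeked:.1%} false positives")
```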

Except for this issue, I think the article is excellent and I'm happy to support approval. 12:15, 28 April 2009 (UTC)

I saw the Merck case as a variation of publication bias, in that Merck suppressed negative data. However, I see your point. I removed this statement and replaced it with your first sentence above, which is succinctly worded. OK? - Robert Badgett 01:31, 29 April 2009 (UTC)
Thanks, added my name to To Approve. Well done Robert. Gareth Leng 08:25, 7 May 2009 (UTC)

Inappropriate use of RCTs

I realise the following pertains mostly (but not entirely) to a small minority of all randomised trials, but I still think these negatives need to be mentioned.

There are some things missing from the ethical issues, and also some limitations of RCTs that need to be discussed.

Randomized Controlled Trials are, by their very nature, known to give false positives at a predictable rate: the significance level, conventionally 5%. This is exploited by unscrupulous organisations and individuals wishing to promote therapies that are ineffective. RCTs also can't give a definitive negative, so when an effect is not shown the study can simply call for further testing and larger studies.
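
To make "a predictable rate" concrete, here is a minimal sketch (mine, not the commenter's; the distributions and sample sizes are assumptions): at the conventional threshold of p < 0.05, about 1 in 20 trials of a completely inert therapy will come out "positive" by construction.

```python
# Sketch: at alpha = 0.05, about 5% of trials of an inert therapy
# come out "significant" by design. Illustrative parameters only.
import math
import random

random.seed(1)

def null_trial(n=100):
    # One two-arm trial in which the "drug" does nothing at all.
    treat = [random.gauss(0, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(treat) / n - sum(control) / n
    se = math.sqrt(2 / n)          # known unit variance in both arms
    return abs(diff / se) > 1.96   # two-sided z-test at alpha = 0.05

trials = 10000
positives = sum(null_trial() for _ in range(trials))
print(f"{positives / trials:.1%} of null trials were 'significant'")  # ~5%
```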

This is a problem with drug companies, which can keep testing things until they pass, but it is even more of a problem with Spiritual, Complementary, and Alternative Medicine (or SCAM for short). Scammers exploit RCTs in an attempt to lend legitimacy to scientifically impossible and otherwise unjustifiable therapies.

There is also an issue akin to data-mining: multiple comparisons. If you test a therapy against 20 slightly different diseases, it is more likely than not that at least one of those 20 tests will be a false positive (see the sketch below). This leads scammers to test a supposedly universal therapy that they support against everything they can think of, and when they find one that tests positive they loudly proclaim that their universal therapy has been proven.
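
The arithmetic behind that claim, as a short sketch (my own; it assumes 20 independent tests at the conventional alpha = 0.05): the chance that at least one null test comes up "significant" is 1 - 0.95^20, roughly 64%, and the standard Bonferroni correction restores the intended error rate by dividing the threshold by the number of tests.

```python
# Sketch: family-wise error rate for k independent tests of an
# inert therapy, and the Bonferroni correction. Illustrative numbers.
alpha, k = 0.05, 20

fwer = 1 - (1 - alpha) ** k
print(f"chance of at least one false positive: {fwer:.0%}")  # ~64%

corrected = alpha / k  # Bonferroni: test each comparison at alpha / k
fwer_corrected = 1 - (1 - corrected) ** k
print(f"with Bonferroni (p < {corrected}): {fwer_corrected:.1%}")  # ~4.9%
```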

The reason RCTs have these problems is that no science, and no understanding of the biological mechanism of action, is involved anywhere in the process. RCTs rely purely on random chance.

This also makes RCTs easy targets for outright fraud. In other ways of doing science, fraud is harder to get away with: to be correct one has to be logically consistent, and simple thought experiments can check a proposition. But as long as a fraudster knows enough statistics to generate mathematically consistent fake data, they can claim anything with nobody the wiser.

There is also the moral issue of testing things that have already been disproven by the laws of science billions of times over. Since tests on humans are dangerous, expensive, inconvenient, and, when there is no pressing need for them, arguably illegal under legislation arising from the war-crimes trials, some RCTs are highly unethical.

There's also the issue that "informed consent" in the case of scientific absurdities is never actually given, even though it is required. The investigators cannot admit that the test offers no benefit to anyone and is trying to prove something impossible and already disproven, and ethics boards never require them to say so.

Also, RCTs have become something of a religion for many people in medicine, which blinds them to scientific absurdities. For example, the Cochrane Library of Systematic Reviews proudly states that magical potions, made by diluting poisons until none of the original atoms remain while shaking them "magically", are effective against some diseases, and no amount of scientific reasoning will convince them to remove the insanity. Other websites and organisations blindly follow the Cochrane Library and likewise refuse to remove it, regardless of evidence. And they do not understand that the word "evidence" does not just mean RCTs, and that their peers in other scientific fields have evidence, and even hard proof, without any need for RCTs.

The conclusions one can draw from an RCT are also extremely limited. For the exact combination of elements tested, you get either an almost certain "yes, that precise combination works" or an "I don't know, we need more testing". Without looking at the underlying science, there is no way to generalise from the results, so if you want to make even the slightest change, you need to test everything all over again. That does not stop people from drawing, or proclaiming, unsupported broad conclusions from the tests, though.

There's also the issue of what constitutes a placebo. Whilst it is quite simple to come up with a placebo when testing chemicals for their chemical properties, other things, like surgery, acupuncture, magical spells, etc., have no clear established placebos and often use inappropriate ones. Studies of supposedly magical remedies can't use a regular placebo, or they would be comparing a placebo against a placebo. For them, a true placebo would be something proven chemically identical to the remedy but without the "magical" process being tested (i.e. change nothing but what you are testing), yet they never, ever, make the slightest attempt to prove that theirs is a real placebo.

And of course, since the testing process itself relies on the rules of science and mathematics, one has to assume those rules are true in order to test anything. The process therefore cannot be used to test the proposition that the laws of science and mathematics are all wrong, as homeopaths and other idiots argue in defence of "magical" treatments.

So more coverage of the drawbacks of RCTs, and of when they are inappropriate, would be good. The article already mentions some of the steps required before an RCT is appropriate, but this could be spelled out better.

Don't get me wrong: I love statistics, and I love randomised controlled trials, the Cochrane Library, and medical research. But the method has both advantages and limitations, and those need to be spelled out. Carl Kenner 20:38, 10 May 2009 (UTC)

APPROVED Version 1.0

Congratulations to everyone who worked on this article -- it has now been approved! Hayford Peirce 18:49, 12 May 2009 (UTC)