Donanemab - uh, oh





Hard to ignore the fact that Lilly has gone essentially "radio silent" on donanemab lately, no?

Trying to avoid investor lawsuits?

https://stocks.apple.com/AZeU2ZeWMQTy5RZHs_UwKW

The study of donanemab that Lilly touts as a breakthrough?
The study had a P = .04, which is considered significant; anything less than .05 is, and the more zeros after the decimal point, the stronger the evidence. I recognize the need for drugs for diseases like this, but a P of .04 should indicate that more studies need to be done to confirm efficacy.
Medicare needs to approach this softly and either demand more studies or require a very tight protocol for use before it writes a check. Buyer beware.
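To make the "buyer beware" point concrete, here is a minimal simulation — all numbers invented, nothing to do with Lilly's actual trial data — showing that even a drug with zero true effect clears the P < .05 bar about 1 time in 20:

```python
# Simulate many trials of a drug that truly does nothing, and count how
# often the comparison still comes out "significant" at P < .05.
import math
import random
import statistics

random.seed(42)

def two_sample_p(a, b):
    """Approximate two-sided p-value for a difference in means (z-test)."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))  # normal tail

trials = 2000
false_positives = 0
for _ in range(trials):
    placebo = [random.gauss(0, 1) for _ in range(200)]  # no real effect
    treated = [random.gauss(0, 1) for _ in range(200)]  # same distribution
    if two_sample_p(placebo, treated) < 0.05:
        false_positives += 1

print(f"'Significant' null trials: {false_positives / trials:.1%}")  # ~5%
```

Which is why a single P = .04, by itself, is a reason to replicate, not a reason to write checks.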
 




Will they be selling tickets to the FDA Advisory Committee review of donanemab?

https://www.reuters.com/business/he...rug-decision-calls-advisory-panel-2024-03-08/

It seems the committee members should be asking a lot of questions, such as:

If you can only slow decline over 6 months, at the earliest phase of Alzheimer's disease (when the decline is barely perceptible), is it even worth the risk of brain swelling, bleeding, and death?

After discontinuing treatment, is there more rapid progression? In other words, you've separated the cognition curves ever so slightly, but is the slope of the treated group now steeper once you've removed the amyloid plaque?

Did Lilly "see something" with longer-duration treatment with donanemab in preliminary testing?
 




Fingers crossed, but this could be bad news for the heir apparent to the CEO job.
 





Want to know the real issue? Nursing homes rush to get patients into wheelchairs so they can charge for that service monthly. This is how the hedge fund private equity people who control the Ivy League make billions (well, a grain of sand on the beach, but they own the freaking beach, capiche?).

That's the worst thing in the world for our grandparents. They should be getting MORE exercise, not less. That and diet are two very controllable risk factors.
 








99% of the people who work at Lilly have no clue what a p-value means, and that includes the research scientists. Heck, I'm not even sure some of the Ph.D. biostats gang could articulate what it means. Let's not forget that this indictment includes those folks who have signs in their yards lecturing those driving by about "science." Those signs were really popular a few years ago, especially on the north side of Indy. They are not so prevalent now, as those people have moved to Hamilton County seeking "safer neighborhoods" or the like. It is fascinating that people who want to embrace all humanity have sought to isolate themselves from the vast majority of it. It must be "statistically significant."
 




Yup. This article from the NIH highlights just a modicum of the issues. Good thing the pharma industry helps fund the FDA. By the way, nobody reading this has any idea what ANOVA is or what assumptions drive ANOVA tests.

  1. The threshold value P < 0.05 is arbitrary. As has been said earlier, it was Fisher's practice to assign P the value of 0.05 as a measure of evidence against a null effect. One can make the significance test more stringent by moving the borderline to 0.01 (1%), or less stringent by moving it to 0.10 (10%). Dichotomizing P values into "significant" and "non-significant" loses information, the same way demarcating laboratory findings into "normal" and "abnormal" does; one may ask what the difference is between a fasting blood glucose of 25 mmol/L and 15 mmol/L.
  2. Statistically significant (P < 0.05) findings are assumed to result from real treatment effects, ignoring the fact that 1 in 20 comparisons in which the null hypothesis is true will still produce significant findings (P < 0.05). The problem is more serious when several hypothesis tests involving several variables are carried out without the appropriate statistical test, e.g., ANOVA instead of repeated t-tests (see the sketch after this list).
  3. Statistically significant results do not translate into clinical importance. A large study can detect a small, clinically unimportant effect.
  4. Chance is rarely the most important issue. Remember that when conducting research, a questionnaire is usually administered to participants, and it typically collects a large amount of information across many variables. The manner in which the questions were asked, and the manner in which they were answered, are important sources of error (systematic error) that are difficult to measure.
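Here is the sketch for point 2 — back-of-the-envelope arithmetic assuming the comparisons are independent (real trial endpoints usually aren't, but the direction of the problem is the same):

```python
# Family-wise error rate under the null: the chance that at least one of k
# independent comparisons comes out "significant" at the .05 level.
for k in (1, 3, 5, 10, 20):
    print(f"{k:2d} comparisons -> P(at least one false positive) = {1 - 0.95**k:.2f}")
# With 20 comparisons you have ~64% odds of a "finding" from pure noise,
# which is why ANOVA (or a multiplicity correction) beats repeated t-tests.
```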
 





This could be a dangerous drug. Elderly individuals should weigh the risks against the rewards before deciding whether this treatment is right for them. Lilly and the FDA see dollar signs. Physicians and PATIENTS should see warning signs. I would decline this particular treatment.
 




"I asked this question before, how long do we need to live? These people are in their 70s. Leave them alone. This drug could be dangerous, and elderly individuals should carefully consider the risks and rewards before deciding whether this treatment is right for them. It seems like Lilly and the FDA are more focused on making money than the potential harm to patients. Physicians and patients alike should pay attention to warning signs. Instead of pushing potentially risky treatments, we should be focusing on improving the overall health of our population. The life expectancy in this country is dropping, and we need to address that issue."
 












"Shall we move on to the next topic? Specifically, let's discuss whether statistical significance is truly significant." Do I need a P-value to tell me what I should already see or don't see?
 




"Shall we move on to the next topic? Specifically, let's discuss whether statistical significance is truly significant." Do I need a P-value to tell me what I should already see or don't see?

Which is more important, the P-value or the sample size? I would suggest that a large sample size without a P-value is more telling than a P-value from a small sample.
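The two are tangled together, though. A minimal sketch with made-up numbers (simple z-test, equal-sized arms): hold the observed effect fixed and watch the P-value move with n alone.

```python
# Same observed effect, different sample sizes: the p-value changes with n
# even though the effect itself never does.
import math

def z_test_p(diff, sd, n):
    """Two-sided p for a mean difference 'diff' between two groups of size n."""
    se = sd * math.sqrt(2 / n)
    z = abs(diff) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

for n in (20, 100, 500, 2000):
    print(f"n = {n:4d} per arm -> P = {z_test_p(0.1, 1.0, n):.4f}")
# A tiny 0.1-SD difference is "non-significant" at n = 20 but highly
# "significant" at n = 2000 -- the statistical-vs-clinical point exactly.
```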
 





We can also go a different route... Is the population being sampled reflective of the population at risk (see the sketch below)? You're data manipulators. You don't need an advanced degree in mathematics to see the obvious.

Or,

I understand that you have a question about the importance of P-value and sample size in statistical analysis. You have suggested that a large sample size without a P-value may be more informative than a small sample size with a P-value. Alternatively, you are wondering if the population being sampled is truly representative of the population at risk. It is important to ensure that the data is not being manipulated, and one does not need an advanced degree in mathematics to recognize this.
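To put a number on the sampling question, here is a minimal simulation — every quantity invented for illustration — of what happens when trial volunteers are systematically healthier than the population actually at risk:

```python
# Selection bias in miniature: the drug helps only non-frail patients, the
# trial enrolls only non-frail volunteers, but the at-risk population is
# half frail. All numbers are hypothetical.
import random
import statistics

random.seed(7)

def outcome(frail, treated):
    base = -1.0 if frail else 0.0     # frail patients do worse overall
    benefit = 0.0 if frail else 0.3   # and get no benefit from the drug
    return base + (benefit if treated else 0.0) + random.gauss(0, 1)

population = [random.random() < 0.5 for _ in range(100_000)]  # True = frail

volunteers = [f for f in population if not f][:5000]  # healthy enrollees
at_risk = population[:5000]                           # ~50% frail

for label, group in (("trial volunteers", volunteers),
                     ("population at risk", at_risk)):
    effect = (statistics.mean(outcome(f, True) for f in group)
              - statistics.mean(outcome(f, False) for f in group))
    print(f"{label:18s}: estimated benefit = {effect:+.2f}")
# Expect ~+0.30 in the trial sample vs ~+0.15 in the at-risk population:
# a clean p-value says nothing about whom the answer applies to.
```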
 




Well played!

Remember when we were all told that Prozac cured depression because it was an SSRI?

Absolute GARBAGE.

Now, go out there and put your rainbow sign in the yard, talking about how you believe in science. You believe in science that benefits YOUR idiotic BELIEFS. That, ladies, gentlemen, and chicks with d%&cks, is what is called hypocrisy.
 




Thanks, Lilly. Maybe you should have focused on legalizing pot instead of turning white, suburban housewives into a bunch of idolized zombies with kids addled with autism and so-called attention deficit. Even if they overcame that and the lure of OnlyFans, they have still become part of the self-centered idiots that we all hope can overcome the challenges we all face.

The Hidden Harm of Antidepressants | Scientific American
 





It's week one of March Madness, and you set up the perfect alley-oop.
 




Do you mean those rainbow signs? They were all over the Northeast in various versions. Here you are...

All love is love. Of course, if at a launch meeting. Other than that, if you need an excuse to explain away why you are a total wh%^re, then try another venue, in private please.

All Lives Matter. Does that include Saddam Hussein and the BBQ guy in Haiti?

Women's Rights are Human Rights. Yes, we all know that here in Western civilization. Try that out in, say, Pakistan or Saudi Arabia and see how long it takes before you are beheaded.

No Human is illegal. This is a classic strawman argument. It should say, "No Human is Guilty, including Jeffrey Dahmer (he was just hungry)."

Science is Real. Sadly, I have to refer to the previous posts on p-values. Science is a debate and a discussion, not some blind obedience to government bureaucrats.

Bottom line... Damn, people are dumb.