Comments on The Hunting of the Snark: "Reinhart, Rogoff, and McArdle" (blog by Susan of Texas)

Ghost of Joe Liebling's Dog (2013-10-05 08:04):
I had to follow the link, just for the "heretofore to be known as" -- it's so rare to see one in the wild.

Such a shiny big word; it's almost sad that it doesn't mean what she thinks it means.

Susan of Texas (2013-05-31 09:23):
Smut Clyde--I am guessing that her knowledge of Australia comes mostly from Nevil Shute books, which would explain the WWII Australian slang.

McArdle is extremely parochial for someone who's a world traveler.

fish (2013-05-30 11:00):
Maybe a better way of saying it is that the p value represents the confidence with which you can say that the null hypothesis (i.e. no correlation) is *not* true. So when the p value hits 0.5, it is a coin toss (50/50 true/not true). At p = 0.05 you are correct that the null is not true 95% of the time.
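The 5% figure being argued over can be made concrete with a quick simulation - a minimal sketch on invented data (nothing here comes from the R&R dataset): when two variables are truly uncorrelated, a test at the p = 0.05 level still "finds" a correlation about 5% of the time. That false-positive rate is what the significance cutoff controls.

```python
import random

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

random.seed(1)
n, trials = 20, 4000

# Null distribution of |r| when x and y are truly uncorrelated.
null_rs = sorted(abs(pearson_r([random.gauss(0, 1) for _ in range(n)],
                               [random.gauss(0, 1) for _ in range(n)]))
                 for _ in range(trials))
cutoff = null_rs[int(0.95 * trials)]  # the |r| needed for "p < 0.05"

# Fresh uncorrelated data: how often does a 5%-level test cry "correlation!"?
false_pos = sum(abs(pearson_r([random.gauss(0, 1) for _ in range(n)],
                              [random.gauss(0, 1) for _ in range(n)])) > cutoff
                for _ in range(trials)) / trials
print(false_pos)  # close to 0.05 by construction
```

The variable names and sample sizes are illustrative only; the point is that "significant at p = 0.05" is a statement about this false-alarm rate under the null, not a direct probability that any particular hypothesis is true.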
fish (2013-05-30 10:54):
> I'm not saying that the opposite of the tested condition is true, just that if no statistically significant correlation is found at some confidence interval, then you can say that no correlation exists with that level of confidence. That's right, isn't it?

No, it is not exactly right. You could have a p value of, say, 0.056; this would not traditionally be considered statistically significant, but it is still quite likely to be true (94.4%). The p = 0.05 cutoff is a somewhat arbitrary one, set so that there is some standard language that everyone can agree on. This kind of statistics is a continuum, which is why sample size is a critical component of any statistical argument. Large effects can be detected with small N (e.g. the fake data from R&R), but it takes much larger sets to prove small effects are real. But by that time, you are admitting that the effects are small (the opposite of what R&R want in this case) and probably other conditions are more important, or there are hundreds of small effects that add up to the observed phenomenon...

~ifthethunderdontgetya™³²®© (2013-05-30 08:45):
> Or McArdle is attempting to re-write reality to agree with her personal opinions.

I think that's the winning ticket.

Downpuppy (2013-05-30 07:01):
Crowding out?

Crowding out!!*&^$*&*

Yeah. Mid-term AFR is 1% because nobody has money to lend.

She isn't even trying.

brad (2013-05-29 22:04):
Statistical significance, like all empirical manifestations of consensus reality, completely leaves out faith, stupidity, ignorance, and self-serving partisan deception.

Besides which, it hurts the authoritarian brain to have to think of things in terms of probabilities and potentialities and multiple outcomes based on multiple choices by multiple actors, when everything is ultimately good/bad, right/wrong, based on tribal loyalty, and Daddy does all the real doing anyway.

McMegan naturally no like.

Dragon-King Wangchuck (2013-05-29 19:09):
Well, more data changes stuff, sure, I totally accept that. But on the point - once you determine that something is not statistically significant at p = x, then it's x likely to be untrue. While stats defines the continuum of possibilities between true and untrue, it's binary in that there's no state for "neither". IOW, the probability that the hypothesis is true summed with the probability that it's not true is 100%.

In the specific example, McArdle (and austerity junkies) believe that debt is correlated (actually they believe causation, which is another argument) with low growth. There is no statistically significant correlation between debt and growth rate. Therefore the hypothesis is not likely to be true.
If the p value for that test was the typical two-sigma, 19-times-out-of-20 level, then it is 95% likely (or whatever the actual percentage is for that test distribution) that there is no correlation between debt and growth.

Unless there's another state. I mean, I'm not saying that the opposite of the tested condition is true, just that if no statistically significant correlation is found at some confidence interval, then you can say that no correlation exists with that level of confidence. That's right, isn't it?

fish (2013-05-29 16:24):
> "Not statistically significant" means EXACTLY "unlikely to be true".

Actually, all it means is that you can't *rule out* the null (not true) hypothesis. You can have a not statistically significant result that can become significant if the sample size (n) becomes larger. The larger the n required to prove something, the smaller the effect size is, but it can still be real.

Smut Clyde (2013-05-29 15:29):
"Bonza" is *not* an acceptable spelling of "bonzer". Urban Dictionary is wrong about this.

Why MM would affect out-of-date Orstralian slang is a mystery.

Susan of Texas (2013-05-29 14:59):
McArdle's numbers mean exactly what she wants them to mean, no more, no less.

More seriously, since she is certain the conclusion is right, the actual proof doesn't matter to her at all. It's the conservative mind at work, in which denial prevents them from acknowledging anything they don't like.

Dragon-King Wangchuck (2013-05-29 14:53):
"Not statistically significant" means EXACTLY "unlikely to be true". In fact, the degree to which it is statistically insignificant quantifies the probability that the thing is not true.

Is there a potential argument about p-values and sample sizes and wevs? No. That's covered by the word "unlikely". The likelihood is exactly described by the significance test. And sample sizes? The not statistically significant correlation was found using EXACTLY the same data that R&R used in the first place. If there was enough data to prove the hypothesis true, there is also enough data to prove it false - at exactly the same level of confidence.

Unless, I guess, McArdle uses a new kind of statistics where it's possible to reject the null hypothesis while still getting it to pay for dinner and a movie.

Susan of Texas (2013-05-29 14:22):
See, just because R and R were wrong doesn't mean that what they said was wrong. It just means that if they were wrong, which they were, it doesn't matter.

Dragon-King Wangchuck (2013-05-29 14:20):
> But "not statistically significant" is not the same as "unlikely to be true".

WhatisthisIdon'teven.
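fish's sample-size point - that a small real effect can fail a significance test at small n and pass at large n - can be sketched numerically. The true correlation of 0.15 is an invented illustration, and the |r| > 1.96/√n cutoff is the standard large-sample approximation to a two-sided 5% test, not an exact one:

```python
import math
import random

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

def significant(n, rho, rng):
    """One experiment: draw n points with true correlation rho,
    then apply the rough large-sample 5% test |r| > 1.96/sqrt(n)."""
    x = [rng.gauss(0, 1) for _ in range(n)]
    y = [rho * a + math.sqrt(1 - rho * rho) * rng.gauss(0, 1) for a in x]
    return abs(pearson_r(x, y)) > 1.96 / math.sqrt(n)

rng = random.Random(7)
trials = 300

# Same true effect (rho = 0.15), wildly different verdicts by sample size.
power_small = sum(significant(30, 0.15, rng) for _ in range(trials)) / trials
power_big = sum(significant(2000, 0.15, rng) for _ in range(trials)) / trials
print(power_small, power_big)  # the small-n test usually misses the effect
```

This is the continuum fish describes: "not significant" at n = 30 mostly reflects low power against a small effect, while at n = 2000 the same small effect is detected almost every time - which is also why proving it real amounts to conceding it is small.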