Original poster: janyiyi

Tour Guide Mementos

Tour guides in your travels jot down Mementos and Keepsakes from each Tour of my new book: Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (CUP 2018). Their scribblings, which may at times include details, at other times just a word or two, may be modified through the Tour, and in response to questions from travelers (so please check back). Since these are just mementos, they should not be seen as replacements for the more careful notions given in the journey (i.e., book) itself. Still, you’re apt to flesh out your notes in greater detail, so please share yours (along with errors you’re bound to spot), and we’ll create Meta-Mementos.

Excursion 1. Tour I: Beyond Probabilism and Performance

Notes from Section 1.1 Severity Requirement: Bad Evidence, No Test (BENT)

1.1 Terms (quick looks, to be crystallized as we journey on)

  • epistemology: The general area of philosophy that deals with knowledge, evidence, inference, and rationality.
  • severity requirement: In its weakest form it supplies a minimal requirement for evidence:
    severity requirement (weak): One does not have evidence for a claim if little if anything has been done to rule out ways the claim may be false. If data x agree with a claim C but the method used is practically guaranteed to find such agreement, and has little or no capability of finding flaws with C even if they exist, then we have bad evidence, no test (BENT).
  • error probabilities of a method: probabilities that it leads, or would lead, to erroneous interpretations of data. (We will formalize this as we proceed.)

error statistical account: one that revolves around the control and assessment of a method’s error probabilities. An inference is qualified by the error probability of the method that led to it.

(This replaces common uses of “frequentist” which actually has many other connotations.)
error statistician: one who uses error statistical methods.

severe testers: a proper subset of error statisticians: those who use error probabilities to assess and control severity. (They may use them for other purposes as well.)

The severe tester also requires reporting what has been poorly probed and inseverely tested.
Error probabilities can, but don’t necessarily, provide assessments of the capability of methods to reveal or avoid mistaken interpretations of data. When they do, they may be used to assess how severely a claim passes a test.

  • methodology and meta-methodology: Methods we use to study statistical methods may be called our meta-methodology – it’s one level removed.

We can keep to testing language as part of the meta-language we use to talk about formal statistical methods, where the latter include estimation, exploration, prediction, and data analysis.

There’s a difference between finding H poorly tested by data x, and finding x renders H improbable – in any of the many senses the latter takes on.
H: Isaac knows calculus.
x: results of a coin flipping experiment

Even taking H to be true, data x has done nothing to probe the ways in which H might be false.
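
A tiny simulation of this point, using an invented passing rule rather than anything from the book: declare H "passed" whenever an irrelevant coin-flipping experiment yields at least 40 heads in 100 fair tosses. The passing rate is the same whether or not Isaac knows calculus, so the procedure has essentially no capability of finding flaws in H.

```python
import random

random.seed(0)

def passes_H(n=100, threshold=40):
    """Invented rule: declare H 'passed' whenever an irrelevant coin-flip
    experiment yields at least `threshold` heads in n fair tosses."""
    heads = sum(random.random() < 0.5 for _ in range(n))
    return heads >= threshold

REPS = 20_000
pass_rate = sum(passes_H() for _ in range(REPS)) / REPS
# The flips carry no information about H, so this rate (about 0.98) is the
# same whether H is true or false: agreement is practically guaranteed.
print(f"P(H 'passes'), regardless of whether H is true: {pass_rate:.3f}")
# Bad evidence, no test (BENT).
```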

5. R.A. Fisher, against isolated statistically significant results (p. 4).

[W]e need, not an isolated record, but a reliable method of procedure. In relation to the test of significance, we may say that a phenomenon is experimentally demonstrable when we know how to conduct an experiment which will rarely fail to give us a statistically significant result. (Fisher 1935b/1947, p. 14)

Notes from Section 1.2 of SIST: How to Get Beyond the Statistics Wars

6. statistical philosophy (associated with a statistical methodology): core ideas that direct its principles, methods, and interpretations.
Two main philosophies about the roles of probability in statistical inference: performance (in the long run) and probabilism. (A toy numerical contrast of the roles appears after this list.)
(i) performance: probability functions to control and assess the relative frequency of erroneous inferences in some long run of applications of the method
(ii) probabilism: probability functions to assign degrees of belief, support, or plausibility to hypotheses. They may be non-comparative (a posterior probability) or comparative (a likelihood ratio or Bayes Factor)

Severe testing introduces a third:
(iii) probativism: probability functions to assess and control a method's capability of detecting mistaken inferences, i.e., the severity associated with inferences.
• Performance is a necessary but not a sufficient condition for probativeness.
• Just because an account is touted as having a long-run rationale, it does not mean it lacks a short run rationale, or even one relevant for the particular case at hand.
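
A toy numerical contrast of these roles, with made-up numbers not taken from the book: 65 heads in 100 tosses, two candidate values θ = 0.5 and θ = 0.7, and an assumed equal prior over the two for the posterior. A probabilist report cites the likelihood ratio or posterior; a performance report cites the long-run error rates of the decision rule in use; probativism asks what those error probabilities say about this test's capability to detect flaws in the inference drawn.

```python
from math import comb

# Invented data: 65 heads in n = 100 tosses; two candidate values of theta.
n, k = 100, 65
theta0, theta1 = 0.5, 0.7

def likelihood(theta):
    return comb(n, k) * theta**k * (1 - theta)**(n - k)

# (ii) probabilism: comparative support (likelihood ratio), or a posterior
# probability under an assumed prior P(theta0) = P(theta1) = 0.5.
lr = likelihood(theta1) / likelihood(theta0)
post0 = likelihood(theta0) / (likelihood(theta0) + likelihood(theta1))

# (i) performance: long-run error rates of the rule "infer theta1 if X >= 59".
def tail(theta, cutoff=59):
    return sum(comb(n, j) * theta**j * (1 - theta)**(n - j)
               for j in range(cutoff, n + 1))

alpha = tail(theta0)      # type I error rate, ~0.044
beta = 1 - tail(theta1)   # type II error rate, ~0.006

print(f"likelihood ratio L(0.7)/L(0.5): {lr:.1f}")
print(f"posterior P(theta = 0.5 | data), equal priors: {post0:.3f}")
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}")
# (iii) probativism asks, in addition, how capable this very test was of
# detecting a flaw in the particular inference drawn from these data.
```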

7. Severity strong (argument from coincidence):
We have evidence for a claim C just to the extent it survives a stringent scrutiny. If C passes a test that was highly capable of finding flaws or discrepancies from C, and yet no or few are found, then the passing result, x, is evidence for C.
lift-off vs. drag-down
(i) lift-off: an overall inference can be more reliable and precise than its premises individually.
(ii) drag-down: an overall inference is only as reliable/precise as is its weakest premise.

• Lift-off is associated with convergent arguments, drag-down with linked arguments.
• statistics is the science par excellence for demonstrating lift-off!

8. arguing from error: there is evidence an error is absent to the extent that a procedure with a high capability of signaling the error, if and only if it is present, nevertheless detects no error.
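
A minimal sketch of this schema with an invented coin-bias probe (not the book's example): the probe signals "error present" only when it sees at least 59 heads in 100 tosses, so it rarely signals when there is no bias, yet almost always signals a bias as large as θ = 0.7.

```python
import random

random.seed(1)

def probe_signals(theta, n=100, cutoff=59, reps=20_000):
    """Estimate the probability the probe signals 'bias present' when each of
    n tosses lands heads with probability theta (signal iff heads >= cutoff)."""
    hits = 0
    for _ in range(reps):
        heads = sum(random.random() < theta for _ in range(n))
        hits += heads >= cutoff
    return hits / reps

print("false-signal rate with no bias (theta = 0.5):", probe_signals(0.5))   # ~0.04
print("capability of signaling a bias of theta = 0.7:", probe_signals(0.7))  # ~0.99
# If a probe this capable of signaling the error nevertheless stays silent,
# we may argue from error: there is evidence the 0.7-sized bias is absent.
```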

Bernoulli (coin tossing) model: we record success or failure, assume a fixed probability of success θ on each trial, and that trials are independent. (P-value in the case of the Lady Tasting Tea, pp. 16-17.)
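
Under this model a P-value is just an upper binomial tail. A small sketch with illustrative counts (the book's own treatment of the tea-tasting example is on pp. 16-17): the probability of correctly classifying at least k of n cups by sheer guessing (θ = 0.5).

```python
from math import comb

def upper_tail_p(k, n, theta=0.5):
    """P(X >= k) for X ~ Binomial(n, theta): the P-value for k successes
    in n Bernoulli trials under theta = 0.5 (sheer guessing)."""
    return sum(comb(n, j) * theta**j * (1 - theta)**(n - j)
               for j in range(k, n + 1))

print(upper_tail_p(8, 8))  # 8 of 8 correct: (1/2)^8 ~= 0.0039, hard to get by guessing
print(upper_tail_p(6, 8))  # 6 of 8 correct: ~0.145, quite easy to get by guessing
```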

Error probabilities can be readily invalidated due to how the data (and hypotheses!) are generated or selected for testing.

9. computed (or nominal) vs actual error probabilities: You may claim it's very difficult to get such an impressive result due to chance, when in fact it's very easy to do so with selective reporting (e.g., your computed P-value can be small, but the actual P-value is high).
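
A quick simulation of that gap, with illustrative numbers not taken from the book: under a true null hypothesis with a continuous test statistic the P-value is uniform on (0, 1), so if you hunt across 10 independent null tests and report only the smallest, a nominal 0.05 is reached about 40% of the time.

```python
import random

random.seed(2)
ALPHA, N_TESTS, REPS = 0.05, 10, 100_000

# Under a true null (continuous test statistic), each P-value ~ Uniform(0, 1).
# Selective reporting: run N_TESTS null tests, report only the smallest P-value.
hits = sum(min(random.random() for _ in range(N_TESTS)) <= ALPHA
           for _ in range(REPS))

print("nominal (computed) P-value reported:", ALPHA)
print("actual P(reported P <= 0.05)       :", hits / REPS)            # ~0.40
print("exact value, 1 - (1 - 0.05)**10    :", 1 - (1 - ALPHA) ** N_TESTS)
```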

Examples: Peirce and Dr. Playfair (a law is inferred even though half of the cases required Playfair to modify the formula after the fact); Texas marksman (shooting prowess inferred from shooting bullets into the side of a barn and painting a bull's-eye around clusters of bullet holes); Pickrite stock portfolio (Pickrite's effectiveness at stock picking is inferred based on selecting those stocks on which the "method" did best).
• We appeal to the same statistical reasoning to show the problematic cases as to show genuine arguments from coincidence.
• A key role for statistical inference is to identify ways to spot egregious deceptions and create strong arguments from coincidence.

10. Auditing a P-value (one part): checking whether the results are due to selective reporting, cherry picking, trying and trying again, or any number of other similar ruses.
• Replicability isn't enough. Example: observational studies on hormone replacement therapy (HRT) reproducibly showed benefits, but had little capacity to unearth biases due to "the healthy women's syndrome."

Souvenir A. Postcard to Send: the four fallacies from the opening of 1.1.
• We should oust the mechanical, recipe-like uses of statistical methods that have long been lampooned,
• But simple significance tests have their uses, and shouldn't be ousted simply because some people are liable to violate Fisher's warnings.
• They have the means by which to register formally the fallacies in the postcard list (failed statistical assumptions and selection effects alter a test's error-probing capacities).
• Don’t throw out the error control baby with the bad statistics bathwater.

11. severity requirement (weak): If data x agree with a claim C but the method was practically incapable of finding flaws with C even if they exist, then x is poor evidence for C.
severity (strong): If C passes a test that was highly capable of finding flaws or discrepancies from C, and yet no or few are found, then the passing result, x, is an indication of, or evidence for, C.


Reply #1: oliyiyi, posted 2022-8-11 18:19:39
Thanks for sharing
