OP: gracelucy

[Download] Introduction to Bayesian Statistics, published by Wiley in 2004



[UseMoney=20]

76406.pdf (3.64 MB, requires 20 forum coins)


[/UseMoney]

Introduction to Bayesian Statistics (Hardcover)
by William M. Bolstad

Product Details

  • Hardcover: 376 pages
  • Publisher: Wiley-Interscience (April 26, 2004)
  • Language: English
  • ISBN: 0471270202
  • Amazon.com Sales Rank: #312,132 in Books

Editorial Reviews

Review
"I would recommend this book if you are interested in teaching an introductory in Bayesian statistics…" (The American Statistician, February 2006)

"…this book fills a gap for teaching elementary Bayesian statistics…it could easily serve as a self-learning text…" (Technometrics, May 2005)

[In a review comparing Bolstad with another book,] "I will keep both of these books on my shelf, but I expect that Bolstad will be the one most borrowed by my colleagues." (Significance, December 2004)

"...does an excellent job of presenting Bayesian Statistics as a perfectly reasonable approach to elementary problems of statistics…I must heartily recommend this book…" (STATS: The Magazine for Students of Statistics, Fall 2004)

Book Description
There is a strong upsurge in the use of Bayesian methods in applied statistical analysis, yet most introductory statistics texts only present frequentist methods. In Bayesian statistics the rules of probability are used to make inferences about the parameter. Prior information about the parameter and sample information from the data are combined using Bayes theorem. Bayesian statistics has many important advantages that students should learn about if they are going into fields where statistics will be used. This book uniquely covers the topics usually found in a typical introductory statistics book but from a Bayesian perspective.


49 of 50 people found the following review helpful:

I learned a great deal about how the Bayesians do statistics, May 24, 2004

Approximately ten years ago, I received my initial statistics instruction from Dr. Robert Hogg, one of the leading educators in the field. There were occasions in class when he referred to the Bayesians, calling them a group of statisticians who rely on separate "a priori" and "a posteriori" analyses. As was his style, he made several jokes about "a posteriori" data. The structure of the class was such that he could not spend a great deal of time on Bayesian statistics, but his brief comments have always remained in my mind.


Therefore, when I received this book I immediately decided that I would read it. From it, I learned that the Bayesian approach to statistics is valuable and more accurately reflects the way humans think about the world. There are two primary philosophical approaches to statistics, the frequentist and the Bayesian, with the frequentist approach being the one most widely covered in basic statistics classes. A frequentist statistician uses random samples to provide estimates for unknown parameters of populations.


The Bayesian approach considers the population parameters to be random variables. The process of determining the value of a parameter starts with a subjective prior distribution of the parameter before the data is analyzed. After the data is collected and organized, Bayes' theorem is then used to revise your beliefs about the values of the parameters.
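
The updating step described here can be written out in a few lines. Below is only a rough sketch with invented numbers (a three-point prior on a coin's success probability, revised after 7 successes in 10 tosses); it is not taken from the book or from its Minitab/R macros.

[code]
# Sketch: revising a discrete prior on a binomial success probability p.
# Posterior is proportional to prior * likelihood, renormalized to sum to one.
from math import comb

# Hypothetical prior beliefs about p (values and weights are made up).
p_values = [0.2, 0.5, 0.8]
prior    = [0.25, 0.50, 0.25]

# Hypothetical data: 7 successes in 10 trials.
n, y = 10, 7

# Likelihood of the observed data at each candidate value of p.
likelihood = [comb(n, y) * p**y * (1 - p)**(n - y) for p in p_values]

# Multiply prior by likelihood, then divide by the sum so the result adds to one.
unnorm    = [pr * lk for pr, lk in zip(prior, likelihood)]
posterior = [u / sum(unnorm) for u in unnorm]

for p, post in zip(p_values, posterior):
    print(f"p = {p}: posterior probability {post:.3f}")
[/code]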


The first sections deal with the basics of summarizing and displaying data, and with logic, probability, and uncertainty. These sections are largely the same as in frequentist statistics, so there is no distinction between the Bayesian and frequentist philosophies yet. The first real differences occur at the end of chapter 4, which covers logic, probability, and uncertainty; this is the point where Bayes' theorem and the principles of prior and posterior probabilities are introduced. Chapter 5 describes discrete random variables, and again this section is standard material on probability.
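
The prior-to-posterior calculation for events introduced there looks like the short sketch below; the diagnostic-test numbers are invented for illustration and are not from the book.

[code]
# Sketch: Bayes' theorem for events. Joint probabilities come from the
# multiplication rule; dividing by their sum gives the posterior probability.
prior_disease       = 0.01   # hypothetical prior P(disease)
p_pos_given_disease = 0.95   # hypothetical test sensitivity
p_pos_given_healthy = 0.05   # hypothetical false-positive rate

joint_disease = prior_disease * p_pos_given_disease
joint_healthy = (1 - prior_disease) * p_pos_given_healthy

posterior_disease = joint_disease / (joint_disease + joint_healthy)
print(f"P(disease | positive test) = {posterior_disease:.3f}")  # about 0.161
[/code]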


The true philosophy of Bayesian statistics appears in chapter 6, which covers Bayesian inference for discrete random variables. As a newcomer to this area, I read it with great interest and learned a great deal about how Bayesian operations are performed. The remaining sections deal with the processes of performing basic statistical operations using Bayesian methods. This includes:

* Bayesian inference for binomial proportion.
* Bayesian inference for normal mean.
* Bayesian inference for difference between means.
* Bayesian inference for simple linear regression.

There are also two chapters that compare the Bayesian and frequentist techniques. Chapter 9 compares the Bayesian and frequentist techniques for inference about proportions, and chapter 11 compares the techniques for inference about means. Exercises are included at the end of each chapter, and appendix F is devoted to the answers to the odd-numbered exercises.

I learned an enormous amount about Bayesian methods from this book, and I strongly recommend it if you are interested in learning how the Bayesians do things.

Features of the Text
In this text I have introduced Bayesian methods using a step-by-step development from conditional probability. In Chapter 4, the universe of an experiment is set up with two dimensions: the horizontal dimension is observable, and the vertical dimension is unobservable. Unconditional probabilities are found for each point in the universe using the multiplication rule and the prior probabilities of the unobservable events. Conditional probability is the probability on that part of the universe that occurred, the reduced universe. It is found by dividing the unconditional probabilities by their sum over all the possible unobservable events. Because of the way the universe is organized, this summing is down the column in the reduced universe. The division scales them up so the conditional probabilities sum to one. This result, known as Bayes' theorem, is the key to this course.

In Chapter 6 this pattern is repeated with the Bayesian universe. The horizontal dimension is the sample space, the set of all possible values of the observable random variable. The vertical dimension is the parameter space, the set of all possible values of the unobservable parameter. The reduced universe is the vertical slice that we observed. The conditional probabilities given what we observed are the unconditional probabilities found by using the multiplication rule (prior × likelihood) divided by their sum over all possible parameter values. Again, this sum is taken down the column. The division rescales the probabilities so they sum to one. This gives Bayes' theorem for a discrete parameter and a discrete observation.

When the parameter is continuous, the rescaling is done by dividing the joint probability-probability density function at the observed value by its integral over all possible parameter values so it integrates to one. Again, the joint probability-probability density function is found by the multiplication rule, and at the observed value it is prior × likelihood. This is done for binomial observations and a continuous beta prior in Chapter 8. When the observation is also a continuous random variable, the conditional probability density is found by rescaling the joint probability density at the observed value by dividing by its integral over all possible parameter values. Again, the joint probability density is found by the multiplication rule, and at the observed value it is prior × likelihood. This is done for normal observations and a continuous normal prior in Chapter 10. All these cases follow the same general pattern.
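
As a concrete illustration of the binomial observation / beta prior case mentioned above: with a Beta(a, b) prior on the proportion and y successes in n trials, prior × likelihood renormalizes to a Beta(a + y, b + n − y) posterior. The sketch below uses invented prior parameters and data, assumes scipy is available, and is not one of the book's macros.

[code]
# Sketch: conjugate beta/binomial updating and a posterior summary.
from scipy import stats

a, b = 1.0, 1.0   # hypothetical Beta prior (uniform on [0, 1])
n, y = 20, 13     # hypothetical data: 13 successes in 20 trials

a_post, b_post = a + y, b + (n - y)
posterior = stats.beta(a_post, b_post)

print(f"posterior mean        = {posterior.mean():.3f}")
print(f"95% credible interval = ({posterior.ppf(0.025):.3f}, {posterior.ppf(0.975):.3f})")
[/code]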

Bayes' theorem allows one to revise his/her belief about the parameter, given the data that occurred. There must be a prior belief to start from. One's prior distribution gives the relative belief weights he/she has for the possible values of the parameters. How to choose one's prior is discussed in detail. Conjugate priors are found by matching the first two moments with prior belief on location and spread. When the conjugate shape does not give a satisfactory representation of prior belief, setting up a discrete prior and interpolating is suggested.
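
One way to read the moment-matching step, in the beta-prior case, is sketched below with a made-up prior belief; the helper function is hypothetical and not from the book.

[code]
# Sketch: choose the Beta(a, b) prior whose mean and standard deviation match
# a stated prior belief about the proportion's location and spread.
def beta_from_mean_sd(mean, sd):
    """Return (a, b) for the Beta distribution with the given mean and sd."""
    var = sd ** 2
    if var >= mean * (1 - mean):
        raise ValueError("sd too large for a Beta distribution with this mean")
    nu = mean * (1 - mean) / var - 1   # 'equivalent sample size' a + b
    return mean * nu, (1 - mean) * nu

# Hypothetical prior belief: the proportion is around 0.3, give or take 0.1.
a, b = beta_from_mean_sd(0.3, 0.1)
print(f"matched conjugate prior: Beta({a:.2f}, {b:.2f})")  # roughly Beta(6.00, 14.00)
[/code]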

Details that I consider beyond the scope of this course are included as footnotes. There are many figures that illustrate the main ideas, and there are many fully worked out examples. I have included chapters comparing Bayesian methods with the corresponding frequentist methods. There are exercises at the end of each chapter, some with short answers. In the exercises, I only ask for the Bayesian methods to be used, because those are the methods I want the students to learn. There are computer exercises to be done in Minitab or R using the included macros. Some of these are small-scale Monte Carlo studies that demonstrate the efficiency of the Bayesian methods evaluated according to frequentist criteria.
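
A toy version of such a Monte Carlo study might look like the sketch below. The true proportion, sample size, and prior are invented, and this is only an illustration of the idea, not one of the included macros.

[code]
# Sketch: compare the sample proportion (frequentist estimate) with the
# posterior mean under a Beta(1, 1) prior, using mean squared error --
# a frequentist criterion -- over repeated simulated samples.
import random

true_p, n, reps = 0.2, 20, 10_000
random.seed(1)

se_freq = se_bayes = 0.0
for _ in range(reps):
    y = sum(random.random() < true_p for _ in range(n))
    p_freq  = y / n              # maximum likelihood estimate
    p_bayes = (1 + y) / (2 + n)  # posterior mean under the Beta(1, 1) prior
    se_freq  += (p_freq  - true_p) ** 2
    se_bayes += (p_bayes - true_p) ** 2

print(f"MSE, sample proportion: {se_freq / reps:.5f}")
print(f"MSE, posterior mean   : {se_bayes / reps:.5f}")
[/code]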

[This post was last edited by the author on 2006-12-7 17:02:34]


Keywords: Bayesian statistics, Wiley, Bayes, unobservable, download, statistics, introduction


#2
蓝色 posted on 2006-12-7 16:33:00
If only I had studied in a math department, this material would be easy.

#3
gracelucy posted on 2006-12-7 21:40:00
If I had known I would end up studying economics, I would have studied math first. Without math, you can hardly take a step in economics.

#4
eshanzi posted on 2006-12-8 09:07:00
Friend, could you make it a bit cheaper? I'd like to take a look, and I need it soon. eshanzi(a)163.com, or could you send me a copy? Thanks.

#5
wzirong posted on 2006-12-8 12:52:00
Thanks, OP.

#6
gracelucy posted on 2006-12-9 00:02:00
You're welcome.

#7
constant posted on 2006-12-10 16:40:00
Thanks!

#8
eshanzi posted on 2006-12-11 15:04:00
Waiting for the OP to lower the price ^_^ I don't have enough coins.

#9
eshanzi posted on 2006-12-11 18:11:00
Sigh, still no price cut. Tired of waiting.

#10
abcnaive posted on 2006-12-12 00:54:00
Not that great, actually.
