OP: mark8865

[Q&A] What is a baseline regression?

What is a baseline regression? Could someone explain? Thanks in advance!

#1
mark8865, posted 2014-7-28 14:23:38
Anyone have an answer?


#2
天狼星TP, posted 2016-10-22 19:35:05
Same question.


#3
贝拉小可爱, posted 2021-1-6 16:54:25 (via mobile)
Same question.


#4
薯条大王, posted 2021-2-20 18:58:04
Same question.


#5
willow, posted 2023-3-21 18:59:03
To determine whether the predictor variable explains a significant amount of variability in the response variable, the simple linear regression model is compared to the baseline model. The fitted "regression line" in a baseline model is just a horizontal line across all values of the predictor variable: its slope is 0 and its y-intercept is the sample mean of Y, which is Y-bar. In a baseline model, the X and Y variables are assumed to have no relationship, so the predicted value of the response is simply the mean Y-bar, regardless of the value of X.

To determine whether a simple linear regression model is better than the baseline model, you compare the explained variability to the unexplained variability, much as in ANOVA.

The explained variability relates to the difference between the regression line and the mean of the response variable. For each data point you calculate this difference, Y-hat-i minus Y-bar, square it to eliminate negative distances, and sum the squared values to obtain the model sum of squares (SSM), the amount of variability your model explains.

The unexplained variability is the difference between the observed values and the regression line. For each data point you calculate Y-i minus Y-hat-i, square it, and sum the squared values to obtain the error sum of squares (SSE), the amount of variability your model fails to explain.

The total variability is the difference between the observed values and the mean of the response variable. For each data point you calculate Y-i minus Y-bar, square it, and sum the squared values to obtain the corrected total sum of squares (SST), which is, of course, the sum of the model and error sums of squares.

SSM and SSE are divided by their corresponding degrees of freedom to produce the mean square model (MSM) and mean square error (MSE). The significance of the regression is then assessed the same way as in ANOVA: compute the F ratio, MSM divided by MSE, and the corresponding p-value. In fact, you'll see an ANOVA table in your regression output as well.
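A minimal numeric sketch of this baseline comparison in Python (assuming numpy and scipy are available; the x/y values below are made-up toy data for illustration only):

```python
import numpy as np
from scipy import stats

# Toy data (hypothetical, for illustration).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 2.9, 3.6, 4.8, 5.1, 6.3])
n = len(y)

# Baseline model: a horizontal line at the sample mean of Y (slope 0, intercept Y-bar).
y_bar = y.mean()

# Simple linear regression fit (one predictor) via least squares.
slope, intercept = np.polyfit(x, y, deg=1)
y_hat = intercept + slope * x

# Sums of squares, exactly as described above.
ssm = np.sum((y_hat - y_bar) ** 2)   # model (explained) sum of squares
sse = np.sum((y - y_hat) ** 2)       # error (unexplained) sum of squares
sst = np.sum((y - y_bar) ** 2)       # corrected total sum of squares (= SSM + SSE)

# Degrees of freedom: 1 for the single predictor, n - 2 for the error.
msm = ssm / 1
mse = sse / (n - 2)

# F ratio and p-value assess whether the regression beats the baseline (mean-only) model.
f_ratio = msm / mse
p_value = stats.f.sf(f_ratio, 1, n - 2)

print(f"SSM={ssm:.3f}  SSE={sse:.3f}  SST={sst:.3f}")
print(f"F={f_ratio:.3f}  p={p_value:.4f}")
```

If the p-value is small, the regression model explains significantly more variability than the baseline model; this is the same F test reported in the ANOVA table of standard regression output.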
