Classical inference assumes the existence of "constant" parameters driving the model. However, because the estimate of beta depends on the sample drawn, it is itself a random variable; this is the concept of a sampling distribution. Depending on the assumption placed on the error term (e.g., iid normal), or by appealing to the Central Limit Theorem, the estimate of beta has a convenient (normal) distribution, so a t-test can be derived easily. The expected Y (not Y itself) is also a random variable, since it is the product of the estimated beta and a given X (which is assumed fixed, as in a controlled experiment, though in practice it usually is not).
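A minimal simulation can make the sampling-distribution idea concrete. The sketch below (all values here, such as the true slope and the error variance, are illustrative assumptions) repeatedly redraws iid normal errors for a fixed X, refits the OLS slope each time, and shows that the slope estimates scatter around the true value, which is exactly the randomness the t-test accounts for.

```python
import numpy as np

# Illustrative assumptions: true slope, intercept, and error sd are made up.
rng = np.random.default_rng(0)
true_beta = 2.0
n, n_reps = 100, 5000

# X is held fixed across replications (the "controlled experiment" view).
x = rng.uniform(0, 10, size=n)

beta_hats = np.empty(n_reps)
for i in range(n_reps):
    # New iid normal errors each replication -> new sample, new estimate.
    y = 1.0 + true_beta * x + rng.normal(0.0, 1.0, size=n)
    # OLS slope for simple regression: cov(x, y) / var(x).
    beta_hats[i] = np.cov(x, y, bias=True)[0, 1] / np.var(x)

# The collection of beta_hats approximates the sampling distribution:
# its mean is near true_beta and its spread is the standard error.
print(beta_hats.mean())
print(beta_hats.std())
```

Plotting a histogram of `beta_hats` would show the roughly normal shape that justifies the t-test; the standard deviation printed above is what the analytic standard-error formula estimates from a single sample.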