Monday, August 16, 2010

Significance in SPSS analysis

Significance testing is a statistical technique used to decide whether a result observed in a sample reflects a genuine feature of the population or could easily have arisen by chance. For example, in regression analysis we may want to draw a conclusion about a regression parameter: is there a true relationship between the dependent and independent variable? A significance test tells us whether the observed relationship is likely to be real. If a regression coefficient is significant at the 5% level, we reject the null hypothesis of no relationship and accept the alternative hypothesis that a relationship exists between the dependent and independent variable. Significance at the 5% level means that, if there were really no relationship, a result at least this extreme would occur by chance in fewer than 5 out of 100 samples. In another example, suppose we take a sample from a population and want to draw conclusions from it at the 5% significance level. Before using the sample for further analysis, we should check whether it is consistent with the population, that is, whether it represents the population's characteristics. If a test on the sample is significant at the 5% level, the sample is unlikely to have come from that population: the probability of drawing such a sample from it is below 5%, and using it for further analysis could give misleading results. Significance is used for the following tests:
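The post discusses SPSS, but the idea of testing a regression coefficient's significance can be illustrated in Python as well. The sketch below assumes numpy and scipy are available; the data are simulated for the example.

```python
# Minimal sketch of testing whether a regression slope is significantly
# different from zero (Python with numpy/scipy assumed).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = np.arange(30, dtype=float)
y = 2.0 * x + rng.normal(0, 5, size=30)  # true relationship plus noise

result = stats.linregress(x, y)
print(f"slope = {result.slope:.3f}, p-value = {result.pvalue:.4f}")

# Reject the null hypothesis of "no relationship" at the 5% level
# when the p-value is below 0.05.
if result.pvalue < 0.05:
    print("slope is significant at the 5% level")
```

The p-value here answers the same question the post describes: how likely is a slope this large if there is really no relationship between the variables?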

Parametric test: A parametric test makes assumptions about the distribution of the data, particularly that it is normally distributed. When its assumptions are met, a parametric test is more powerful than the corresponding non-parametric test. The following are common parametric tests:

Binomial one-sample test of significance of dichotomous distributions
T-test of the difference of means
Normal curve Z-tests of the differences of means and proportions
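Two of the tests listed above can be sketched in Python (numpy and scipy assumed; the data and hypothesized values are made up for illustration):

```python
# Sketch of a binomial test and a one-sample t-test (numpy/scipy assumed).
import numpy as np
from scipy import stats

# Binomial test: could 40 successes in 50 trials come from p = 0.5?
binom_result = stats.binomtest(40, n=50, p=0.5)
print(f"binomial p-value = {binom_result.pvalue:.4f}")

# One-sample t-test: does this sample come from a population with mean 100?
rng = np.random.default_rng(0)
sample = rng.normal(110, 10, size=25)
t_stat, p_value = stats.ttest_1samp(sample, popmean=100)
print(f"t = {t_stat:.3f}, p-value = {p_value:.4f}")
```

In both cases a p-value below 0.05 leads us to reject the null hypothesis at the 5% level, exactly as described for the regression coefficient above.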

Key concepts and terms:
Significance and type one error: A significant result means that the relationship found in the data is unlikely to be due to chance alone. A type one error occurs when we reject a null hypothesis that is actually true, that is, a hypothesis that should have been accepted.
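A small simulation makes the type one error concrete. In the sketch below (Python, numpy/scipy assumed), the null hypothesis is always true, yet a test at the 5% level still rejects it roughly 5% of the time, purely by chance:

```python
# Simulating the type one error rate of a 5%-level t-test
# when the null hypothesis is actually true (numpy/scipy assumed).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
trials = 2000
rejections = 0
for _ in range(trials):
    # Null hypothesis is true: the population mean really is 0.
    sample = rng.normal(0, 1, size=30)
    _, p_value = stats.ttest_1samp(sample, popmean=0)
    if p_value < 0.05:
        rejections += 1  # a type one error

print(f"type one error rate ~ {rejections / trials:.3f}")  # near 0.05
```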

Confidence limits: Confidence limits are the upper and lower bounds of a confidence interval on the sampling distribution. For a specified hypothesis, if the calculated sample value falls within this range, we fail to reject the null hypothesis and the result is not significant. If it falls outside the range, the result is significant and the null hypothesis is rejected. For normally distributed data, the 95% confidence limits for the true population mean are the sample mean plus or minus 1.96 times the standard error.
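The mean-plus-or-minus-1.96-standard-errors rule can be computed directly. A minimal sketch in Python (numpy assumed, simulated data):

```python
# 95% confidence limits for a mean: mean +/- 1.96 * standard error
# (Python with numpy assumed).
import numpy as np

rng = np.random.default_rng(7)
sample = rng.normal(50, 12, size=100)

mean = sample.mean()
std_err = sample.std(ddof=1) / np.sqrt(len(sample))  # standard error of the mean
lower, upper = mean - 1.96 * std_err, mean + 1.96 * std_err
print(f"95% confidence limits: ({lower:.2f}, {upper:.2f})")
```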

Power and type two error: Accepting a false null hypothesis is called a type two error. It happens when a relationship really exists but we conclude that there is none. The probability of a type two error is called beta, and one minus beta is called the power of the test. In some situations a type two error is more dangerous than a type one error.
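Power can also be estimated by simulation. In the sketch below (Python, numpy/scipy assumed), the alternative hypothesis is true by construction, so every failure to reject is a type two error, and power is one minus the observed error rate:

```python
# Estimating beta (type two error rate) and power by simulation
# (numpy/scipy assumed).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
trials = 2000
type_two_errors = 0
for _ in range(trials):
    # Alternative is true: the population mean is 0.5, not 0.
    sample = rng.normal(0.5, 1, size=30)
    _, p_value = stats.ttest_1samp(sample, popmean=0)
    if p_value >= 0.05:
        type_two_errors += 1  # failed to detect the real effect

beta = type_two_errors / trials
print(f"beta ~ {beta:.3f}, power ~ {1 - beta:.3f}")
```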

One-tailed vs. two-tailed tests: When the alternative hypothesis is directional, stating that a parameter is less than or greater than some value, the test is one-tailed. When the alternative hypothesis states only that the parameter differs from some value, in either direction, the test is two-tailed.
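The difference between the two shows up directly in the p-values. A sketch in Python (scipy assumed; the `alternative` argument of `ttest_1samp` requires a reasonably recent scipy):

```python
# Contrasting a two-tailed test with a one-tailed (directional) test
# (numpy/scipy assumed).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
sample = rng.normal(105, 10, size=40)

# Two-tailed: H1 says the mean simply differs from 100.
_, p_two = stats.ttest_1samp(sample, popmean=100, alternative='two-sided')
# One-tailed: H1 says the mean is greater than 100.
_, p_one = stats.ttest_1samp(sample, popmean=100, alternative='greater')

print(f"two-tailed p = {p_two:.4f}, one-tailed p = {p_one:.4f}")
```

When the sample mean lies on the hypothesized side, the one-tailed p-value is half the two-tailed one, which is why the choice of tails must be made before looking at the data.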

Asymptotic vs. exact vs. Monte Carlo significance: Most significance tests are asymptotic: they assume the sample size is large enough for the test statistic's approximate distribution to hold. When the sample size is very small, an exact test should be used instead; exact tests are available in the SPSS Exact Tests add-on module. The Monte Carlo method approximates the exact p-value by random sampling and is used when computing the exact test is impractical, for example with large tables.
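The Monte Carlo idea can be sketched outside SPSS as a permutation test: rather than enumerating every possible rearrangement of the data (the exact approach), we sample random rearrangements and estimate the p-value from them. A minimal Python version (numpy assumed, simulated data):

```python
# Monte Carlo (randomization) estimate of a permutation-test p-value
# for the difference between two group means (numpy assumed).
import numpy as np

rng = np.random.default_rng(11)
group_a = rng.normal(0.0, 1, size=15)
group_b = rng.normal(2.0, 1, size=15)
observed = group_b.mean() - group_a.mean()

pooled = np.concatenate([group_a, group_b])
n_resamples = 5000
count = 0
for _ in range(n_resamples):
    rng.shuffle(pooled)  # random relabeling of the 30 observations
    diff = pooled[15:].mean() - pooled[:15].mean()
    if abs(diff) >= abs(observed):
        count += 1  # resampled difference at least as extreme as observed

p_value = count / n_resamples
print(f"Monte Carlo p-value ~ {p_value:.4f}")
```

More random resamples give a more precise estimate of the exact p-value; the exact test is simply the limit in which all rearrangements are enumerated.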
