Thursday, February 2, 2012

HNI Northern Ent

HNI Northern Ent is a Malaysian registered company (AS 0282513-V). Our registered address is: P.O. Box 64, Kuala Krai Post Office, 18000 Kuala Krai, Kelantan, Malaysia. We are a respected Malaysian company that you can place your trust and confidence in.

HNI Northern Ent was established in 2003 and soon became the leader in the Malaysian market for custom-written essays, assignments and dissertations. We are a company that puts our customers first every time, and we will go the extra mile to ensure our customers are happy and satisfied. We started as a research assistant and thesis writer to the numerous working professionals who were pursuing their Master's and PhD degrees. Since we started in 2003, we have grown tremendously. We have assisted students and professionals in all parts of the world.

Our customers always come first. We will listen to you and do our best to serve your requirements. We will ensure that we assign the best statistical specialist to complete your work, whether it is an essay, a small assignment or an extensive dissertation. Before we send a completed work to you, we will check that the writer has fully met your requirements and scan the paper for plagiarism to ensure that the work you receive is completely original. We have an established Money Back Plagiarism Guarantee, which means that if you detect plagiarism in our work we will fully refund your payment, no questions asked. This is, obviously, not the way we like to operate, and therefore we will do our best to ensure that the work you receive fully meets your requirements.

At present we provide the services described in the sections below.
Over the years we have established close relationships with our customers, and we receive over 60 per cent of our business from returning customers. Our returning customers also enjoy a wide variety of discounts and benefits, which even include free papers. We have already handled work set by the most demanding and difficult professors from universities worldwide.

Whether you are a returning customer or are looking to place your first order with us, you can be assured that you will be 100 per cent satisfied with your experience and with our services.

Our Clients
The company regularly conducts marketing research studies for clients in categories such as
Lecturers
Local University
Membership organizations    
Municipalities and government agencies 
Non-profit organizations and associations 
Overseas University
Post-Graduate Students
Undergraduate Students

Research Capabilities
HNI Research Team offers the full range of SPSS tutorial services:

General instructions
Data Transformations
Statistical Procedures

Company Overview

HNI Northern Ent - changing the way Market Researchers analyze and present data
HNI Northern Ent is dedicated to providing powerful and intuitive software solutions for Market Researchers. Our mission is to advance and promote academic and professional excellence, continuous learning and personal development. As we strive to better understand and meet the needs of our customers, several underlying principles guide our work:
  • We believe software should be easy to learn and easy to use, making you more productive right from the start
  • We believe in letting the technology do the heavy lifting – like automatically applying the correct statistical tests and automatically generating new variables, reports, and analyses to help you focus on finding the story in the data
  • We know from experience that analysis is only half the battle – presenting your results to colleagues and clients is a key part of the Market Researcher’s job and we make it easier than ever before to share your results online and offline.
  • We know researchers use a variety of tools to get their jobs done, so we make it easy to move data to other applications such as PowerPoint®, Excel®, PDF, SPSS® and SAS®

Our Mission Statement
Our mission is to advance and promote academic and professional excellence, continuous learning and personal development. By delivering our services we aim to boost the academic success of our clients, and by keeping to our guarantees and promises we strive to remain the premier provider of custom-written papers and SPSS report services.

Our Vision Statement
Our vision is three-fold: to help our customers, our writers and our employees:
  • Help our customers to excel in their studies or employment, to improve their knowledge and boost their academic and professional career.
  • Offer our researchers continued development through interesting and challenging work, enabling them to broaden their knowledge base, build loyalty and achieve their goals.
Services overview - Statistical Data Analysis
SPSS is one of the leading statistical software packages in the world. It provides powerful data analysis tools, which are indispensable for professionals in many different areas, and it is one of the most widespread and effective research tools available today.  To clarify, SPSS stands for Statistical Package for the Social Sciences; more exactly, it is a computer program for statistical analysis. SPSS is required at many universities, and it is for the most part the standard for data analysis in several fields. We offer professional assistance in several topics and have assisted hundreds of clients with their homework, dissertations and more.

Custom SPSS report services
We are a company that offers custom SPSS report services and has enough zest to make an impact in the current SPSS market by providing each of our customers with original, premium-quality services within a very short period of time. A notable thing about our company is that we employ only professionals with years of hands-on experience who are capable of providing top-quality SPSS services in all areas of academia.

SPSS reports from HNI Northern Ent
If you are facing thesis or dissertation writing and you need to include an SPSS report but don’t know how to go about it, don’t hesitate to contact our HNI Research Team and you will get the necessary assistance, guaranteed. Let a team of skillful scholars do the work for you. Our primary focus is to provide totally authentic, custom and error-free SPSS reports, so if you turn to us, you will get exactly what you need.

Customer satisfaction
Our company concentrates on providing complete customer satisfaction. Every one of our clients will get as many free revisions as necessary if his or her requirements are not met by our staff. So any time you need help, be sure to contact us and you will get the necessary assistance with your SPSS data, guaranteed.

E-mail or fax your SPSS request for a free quote. We'll get back to you within hours with a free estimate. There's no obligation after that. Make sure to visit our blog, where you'll find study guides for sale with hundreds of solved problems on Quantitative Methods, Statistics, Optimization and more, as well as the most common formulas.


Contact Information
With our help, you'll get that extra edge you need! Don't hesitate to contact us and we'll provide you with professional SPSS help. Whether for simple problems or for dissertation help with SPSS, we can assist you by providing an accurate and clear interpretation of your SPSS outputs. The information we provide will also give you hints on how to learn SPSS.
You may contact us via phone, mail, or e-mail at:
Abd Halim Ali, Manager
Phone (+60) 192527496
             (+60) 175660835
Facebook Group:  hniresearchteam
                            SPSS Data Analysis Discussion

Monday, June 13, 2011

Conduct and Interpret the Chi-Square Test of Independence

The Chi-Square Test of Independence is also known as Pearson’s Chi-Square, Chi-Squared, or χ².  χ is the Greek letter chi.  The Chi-Square Test has two major fields of application: 1) the goodness of fit test and 2) the test of independence.

Firstly, the Chi-Square Test can test whether the distribution of a variable in a sample approximates an assumed theoretical distribution (e.g., normal distribution, Beta).  [Please note that the Kolmogorov-Smirnov test is another test for goodness of fit.  The Kolmogorov-Smirnov test has higher power, but can only be applied to continuous-level variables.]

Secondly, the Chi-Square Test can be used to test the independence of two variables.  That means that it tests whether one variable is independent of another.  In other words, it tests whether or not a statistically significant relationship exists between a dependent and an independent variable.  When used as a test of independence, the Chi-Square Test is applied to a contingency table, or cross tabulation (sometimes called crosstabs for short).

Typical questions answered with the Chi-Square Test of Independence are as follows:
  • Medicine - Are children more likely to get infected with virus A than adults?
  • Sociology - Is there a difference between the marital status of men and women in their early 30s?
  • Management - Is customer segment A more likely to make an online purchase than segment B?
  • Economy - Do white-collar employees have a brighter economic outlook than blue-collar workers?
As we can see from these questions and the decision tree, the Chi-Square Test of Independence works with nominal scales for both the dependent and independent variables.  These example questions ask for answer choices on a nominal scale or a tick mark in a distinct category (e.g., male/female, infected/not infected, buy online/do not buy online).
In more academic terms, the test statistic computed from the observed and expected cell counts approximately follows a Chi-Square distribution.  Pearson’s Chi-Square Test of Independence is an approximate test.  This means that the distribution of the test statistic is only approximately Chi-Square.  This approximation improves with large sample sizes.  However, it poses a problem with small sample sizes, for which a typical cut-off point is an expected cell count below five.
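As an illustrative sketch of this procedure (in Python with SciPy rather than SPSS, and with hypothetical counts), a contingency table can be tested for independence; the function also returns the expected cell counts needed for the small-sample check above:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table: rows are age groups,
# columns are infected / not infected counts
observed = [[30, 70],   # children
            [15, 85]]   # adults

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.4f}")
print("expected cell counts:", expected)
# If any expected count fell below five, an exact test would be preferable
```

A small p-value (e.g., below 0.05) would suggest that infection status is not independent of age group in this invented example.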

Taking this into consideration, Fisher developed an exact test for contingency tables with small samples.  Exact tests do not approximate a theoretical distribution, in this case the Chi-Square distribution.  Fisher’s exact test calculates all needed information from the sample using a hypergeometric distribution.

What does this mean? Because it is an exact test, a significance value p calculated with Fisher’s Exact Test will be correct; i.e., when p = 0.01 the test (in the long run) will actually reject a true null hypothesis in 1% of all tests conducted.  For an approximate test such as Pearson’s Chi-Square Test of Independence this is only asymptotically the case.  Therefore the exact test has exactly the Type I Error (α-error, false positives) it calculates as its p-value.

When applied to a research problem, however, this difference might have only a small impact on the results.  The rule of thumb is to use exact tests with sample sizes less than ten.  Both Fisher’s exact test and Pearson’s Chi-Square Test of Independence can be easily calculated with statistical software such as SPSS.
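A minimal sketch of Fisher’s exact test (again in Python with SciPy rather than SPSS, with a hypothetical small-sample table where the Chi-Square approximation would be unreliable):

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table with very small counts
table = [[3, 1],
         [1, 3]]

odds_ratio, p = fisher_exact(table, alternative='two-sided')
print(f"odds ratio = {odds_ratio}, exact p = {p:.4f}")
# The p-value here is computed exactly from the hypergeometric
# distribution, not from a Chi-Square approximation
```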

The Chi-Square Test of Independence is the simplest test of association between an independent and one or more dependent variables; note that a significant result establishes association, not a causal relationship.  As the decision tree for tests of independence shows, the Chi-Square Test can always be used.

Conduct and Interpret a Wilcoxon Sign Test

What is the Wilcoxon Sign Test?

The Wilcoxon sign test is a statistical comparison of the averages of two dependent samples.  The Wilcoxon sign test is a sibling of the t-tests.  It is, in fact, a non-parametric alternative to the dependent samples t-test.  Thus the Wilcoxon signed rank test is used in similar situations as the Mann-Whitney U-test.  The main difference is that the Mann-Whitney U-test tests two independent samples, whereas the Wilcoxon sign test tests two dependent samples.

The Wilcoxon sign test is a test of dependency.  All dependence tests assume that the variables in the analysis can be split into independent and dependent variables.  A dependence test that compares the averages of an independent and a dependent variable assumes that differences in the average of the dependent variable are caused by the independent variable.  Sometimes the independent variable is also called a factor, because the factor splits the sample into two or more groups, also called factor levels.

Dependence tests analyze whether there is a significant difference between the factor levels.  The t-test family uses mean scores as the average to compare the differences, the Mann-Whitney U-test uses mean ranks as the average, and the Wilcoxon Sign test uses signed ranks.

Unlike the t-test and F-test, the Wilcoxon sign test is a non-parametric test.  That means that the test does not assume any properties regarding the distribution of the underlying variables in the analysis.  This makes the Wilcoxon sign test the analysis to conduct when analyzing variables of ordinal scale or variables that are not normally distributed.
The Wilcoxon sign test is mathematically similar to the Mann-Whitney U-test (which is sometimes also called the Wilcoxon two-sample test).  It is also similar in principle to the dependent samples t-test, because just like the dependent samples t-test, the Wilcoxon sign test tests the differences of paired observations.

However, the Wilcoxon signed rank test pools all differences, ranks them and applies a negative sign to all the ranks where the difference between the two observations is negative.  This is called the signed rank.  The Wilcoxon signed rank test is a non-parametric test, in contrast to the dependent samples t-test.  Whereas the dependent samples t-test tests whether the average difference between two observations is 0, the Wilcoxon test tests whether the difference between two observations has a mean signed rank of 0.  Thus it is much more robust against outliers and heavy-tailed distributions.  Because the Wilcoxon sign test is a non-parametric test it does not require a special distribution of the dependent variable in the analysis.  Therefore it is the best test to compare average scores when the dependent variable is not normally distributed and is at least of ordinal scale.

For the test of significance of the Wilcoxon signed rank test it is assumed that with at least ten paired observations the distribution of the W-value approximates a normal distribution.  Thus we can normalize the empirical W-statistic and compare this to the tabulated z-ratio of the normal distribution to calculate the confidence level.
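As an illustrative sketch (Python with SciPy instead of SPSS, with hypothetical paired measurements on the same ten subjects, e.g. a score before and after some treatment):

```python
from scipy.stats import wilcoxon

# Hypothetical paired observations on the same ten subjects
before = [125, 115, 130, 140, 138, 115, 141, 125, 140, 135]
after  = [110, 122, 125, 120, 142, 124, 123, 137, 135, 145]

# The test ranks the paired differences and applies the sign
# of each difference to its rank (the signed rank)
w, p = wilcoxon(before, after)
print(f"W = {w}, p = {p:.4f}")
# A small p would indicate the paired differences do not center on zero
```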

Conduct and Interpret a Mann-Whitney U-Test

What is the Mann-Whitney U-Test?
The Mann-Whitney U-test is a statistical comparison of two independent samples.  The U-test is a member of the bigger group of dependence tests.  Dependence tests assume that the variables in the analysis can be split into independent and dependent variables.  A dependence test that compares the mean scores of an independent and a dependent variable assumes that differences in the mean score of the dependent variable are caused by the independent variable.  In most analyses the independent variable is also called a factor, because the factor splits the sample into two or more groups, also called factor levels.

Other dependency tests that compare the mean scores of two or more groups are the F-test, ANOVA and the t-test family.  Unlike the t-test and F-test, the Mann-Whitney U-test is a non-parametric test.  That means that the test does not assume any properties regarding the distribution of the underlying variables in the analysis.  This makes the Mann-Whitney U-test the analysis to use when analyzing variables of ordinal scale.  The Mann-Whitney U-test is also the mathematical basis for the H-test (also called the Kruskal-Wallis H test), which is basically nothing more than a series of pairwise U-tests.

Because the test was initially designed in 1945 by Wilcoxon for two samples of the same size, and further developed in 1947 by Mann and Whitney to cover different sample sizes, the test is also called the Mann–Whitney–Wilcoxon (MWW) test, Wilcoxon rank-sum test, Wilcoxon–Mann–Whitney test, or Wilcoxon two-sample test.

The Mann-Whitney U-test is mathematically identical to conducting an independent samples t-test (also called a 2-sample t-test) on ranked values.  This approach is similar to the step from Pearson’s bivariate correlation coefficient to Spearman’s rho.  The U-test, however, applies a pooled ranking of all observations.

The U-test is a non-parametric test, in contrast to the t-tests and the F-test; it does not compare mean scores but the median scores of two samples.  Thus it is much more robust against outliers and heavy-tailed distributions.  Because the Mann-Whitney U-test is a non-parametric test it does not require a special distribution of the dependent variable in the analysis.  Thus it is the best test to compare central tendency when the dependent variable is not normally distributed and is at least of ordinal scale.

For the test of significance of the Mann-Whitney U-test it is assumed that with n > 80, or with each of the two samples at least 30, the distribution of the U-value from the sample approximates a normal distribution.  The U-value calculated from the sample can be compared against the normal distribution to calculate the confidence level.

The goal of the test is to test for differences in the medians that are caused by the independent variable.  Another interpretation of the test is that it tests whether one sample stochastically dominates the other sample.  The U-value represents the number of times observations in one sample precede observations in the other sample in the ranking.  That is, for the two samples X and Y, it examines whether Prob(X > Y) > Prob(Y > X).  It is also sometimes said that the Mann-Whitney U-test tests whether the two samples are from the same population, because they would then have the same distribution.  Other non-parametric tests to compare central tendency are the Kolmogorov-Smirnov Z-test and the Wilcoxon sign test.
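The procedure can be sketched as follows (Python with SciPy rather than SPSS; the group scores are hypothetical):

```python
from scipy.stats import mannwhitneyu

# Hypothetical scores from two independent groups
group_a = [19, 22, 16, 29, 24, 30]
group_b = [20, 11, 17, 12, 15]

u, p = mannwhitneyu(group_a, group_b, alternative='two-sided')
print(f"U = {u}, p = {p:.4f}")
# U ranges from 0 to len(group_a) * len(group_b); values near either
# extreme indicate that one group's ranks dominate the other's
```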

Monday, August 16, 2010

Hypothesis testing in SPSS analysis

Hypothesis testing was introduced by Ronald Fisher, Jerzy Neyman, Karl Pearson and Pearson’s son, Egon Pearson. Hypothesis testing is a statistical method that is used in making statistical decisions using experimental data. Hypothesis Testing is basically an assumption that we make about the population parameter.

Key terms and concepts:
Null hypothesis: The null hypothesis is a statistical hypothesis that assumes that the observation is due to a chance factor. In hypothesis testing, the null hypothesis is denoted by H0: μ1 = μ2, which states that there is no difference between the two population means.

Alternative hypothesis: In hypothesis testing, alternative hypothesis, contrary to the null hypothesis, shows that observations are the result of a real effect.

Level of significance: In hypothesis testing, the level of significance refers to the threshold at which we accept or reject the hypothesis. Since 100% accuracy is not possible when accepting or rejecting a hypothesis, we select a level of significance, usually 5%.

Type I error: In hypothesis testing, there are two types of errors: type I and type II. A type I error occurs when we reject a null hypothesis that is actually true. Type I error is denoted by alpha, and the critical region of the normal curve is called the alpha region.

Type II error: A type II error occurs when we accept a null hypothesis that is actually false. Type II error is denoted by beta, and the acceptance region of the normal curve is called the beta region.

Power: In hypothesis testing, power is the probability of correctly rejecting a false null hypothesis. In hypothesis testing, 1 − β is called the power of the analysis.

One-tailed test: In hypothesis testing, when the alternative hypothesis is directional, such as H1: μ1 > μ2, it is called a one-tailed test.

Two-tailed test: In hypothesis testing, when the alternative hypothesis is non-directional, such as H1: μ1 ≠ μ2, it is called a two-tailed test.

Statistical decision for hypothesis testing:
In statistical analysis, we have to make decisions about the hypothesis: whether to accept or reject the null hypothesis. Every test in hypothesis testing produces a significance value (p-value) for that particular test. If the significance value of the test is greater than the predetermined significance level, then we accept the null hypothesis. If the significance value is less than the predetermined level, then we reject the null hypothesis. For example, if we want to see the degree of relationship between two stock prices and the significance value of the correlation coefficient is greater than the predetermined significance level, then we accept the null hypothesis and conclude that there is no relationship between the two stock prices; any apparent relationship in the sample is attributed to chance.
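This decision rule can be sketched in a few lines of Python (using SciPy; the stock-price figures are invented for illustration):

```python
from scipy.stats import pearsonr

alpha = 0.05  # predetermined significance level

# Hypothetical daily closing prices of two stocks
stock_1 = [10.2, 10.5, 10.4, 10.9, 11.1, 11.0, 11.4, 11.3]
stock_2 = [20.1, 20.4, 20.2, 20.8, 21.0, 21.1, 21.5, 21.2]

# Correlation coefficient and its significance value (p-value)
r, p = pearsonr(stock_1, stock_2)
if p < alpha:
    print(f"r = {r:.3f}, p = {p:.4f}: reject the null hypothesis")
else:
    print(f"r = {r:.3f}, p = {p:.4f}: accept the null hypothesis")
```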

Significance in SPSS analysis

Significance testing is a statistical technique used to determine whether a result obtained from a sample reflects the population or is due to chance in the sampling. For example, in regression analysis we may want to conclude whether there is a true relationship between a dependent and an independent variable; significance testing is then used to determine whether the relationship exists or not. If the regression coefficient is significant at the 5% level, we reject the null hypothesis and accept the alternative hypothesis that a relationship exists between the dependent and independent variable, accepting a 5% risk that this decision is wrong. If the coefficient is not significant at the 5% level, the evidence for a true relationship is too weak to rule out chance. Similarly, if we take a sample from a population and want to draw conclusions at the 5% significance level, we must check whether the sample represents the characteristics of that population before using it for further analysis. If the sample fails this check, it does not represent the population, and analyses based on it cannot give accurate results. Significance is used for the following tests:

Parametric tests: Parametric tests make assumptions about the distribution, particularly about the normal distribution. When these assumptions are met, a parametric test is more powerful than the corresponding non-parametric test. The following are common parametric tests:

Binomial one-sample test of significance of dichotomous distributions
T-test of the difference of means
Normal curve Z-tests of the differences of means and proportions

Key concepts and terms:
Significance and type one error: A significance test assesses whether a relationship found in the data could be due to chance. When we reject a null hypothesis that is true, and which should therefore have been accepted, this is called a type one error.

Confidence limits: Confidence limits are the upper and lower bounds of the significance range on a normal curve. For a specified hypothesis, we assume that the value under the hypothesis will lie within this confidence range. If the calculated sample value falls within this range, then we say that the result is not significant. If it falls outside this range, then the result is significant and the hypothesis is rejected. For normally distributed data, the confidence limits for the true population mean are the sample mean plus or minus 1.96 times the standard error.
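The 1.96-standard-error rule above can be computed directly (a plain Python sketch with hypothetical sample values):

```python
import math

# Hypothetical sample measurements
sample = [4.9, 5.1, 5.0, 4.8, 5.2, 5.0, 4.9, 5.1]

n = len(sample)
mean = sum(sample) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))  # sample SD
se = sd / math.sqrt(n)                                          # standard error

# 95% confidence limits: mean plus or minus 1.96 standard errors
lower, upper = mean - 1.96 * se, mean + 1.96 * se
print(f"mean = {mean:.3f}, 95% confidence limits = ({lower:.3f}, {upper:.3f})")
```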

Power and type two errors: When we accept a false hypothesis, it is called a type two error. It happens when we conclude that no relationship exists even though the relationship does exist. One minus beta, where beta is the probability of a type two error, is called power. Type two errors can be more dangerous than type one errors.

One-tailed vs. two-tailed tests: When we assume a direction for the hypothesis, that the parameter will be less than or greater than a given value, it is said to be a one-tailed test. When we only test whether the parameter differs from some value, without assuming a direction, it is said to be a two-tailed test.

Asymptotic vs. exact vs. Monte Carlo significance: Most significance tests are asymptotic, which assumes that the sample size is adequate. When the sample size is very small, an exact test should be used; exact tests are available in the SPSS Exact Tests add-on module. The Monte Carlo method is used when an exact test is too computationally demanding, approximating the exact significance by repeated random sampling.

Difference between dependent and independent variables

What's a variable?

Answer: A variable is an object, event, idea, feeling, time period, or any other type of category you are trying to measure. There are two types of variables: independent and dependent.

What's an independent variable?

Answer: An independent variable is exactly what it sounds like. It is a variable that stands alone and isn't changed by the other variables you are trying to measure. For example, someone's age might be an independent variable. Other factors (such as what they eat, how much they go to school, how much television they watch) aren't going to change a person's age. In fact, when you are looking for some kind of relationship between variables you are trying to see if the independent variable causes some kind of change in the other variables, or dependent variables.

What's a dependent variable?

Answer: Just like an independent variable, a dependent variable is exactly what it sounds like. It is something that depends on other factors. For example, a test score could be a dependent variable because it could change depending on several factors such as how much you studied, how much sleep you got the night before you took the test, or even how hungry you were when you took it. Usually when you are looking for a relationship between two things you are trying to find out what makes the dependent variable change the way it does.

Many people have trouble remembering which is the independent variable and which is the dependent variable. An easy way to remember is to insert the names of the two variables you are using into this sentence in the way that makes the most sense. Then you can figure out which is the independent variable and which is the dependent variable:

(Independent variable) causes a change in (Dependent Variable) and it isn't possible that (Dependent Variable) could cause a change in (Independent Variable).

For example:

(Time Spent Studying) causes a change in (Test Score) and it isn't possible that (Test Score) could cause a change in (Time Spent Studying).

We see that "Time Spent Studying" must be the independent variable and "Test Score" must be the dependent variable because the sentence doesn't make sense the other way around.

Example 2 :


What is the dependent variable in the following function? Briefly explain your choice.

Y = 3X + Z - 5

Solution
The dependent variable is Y. This is because the value of Y depends on the values of both X and Z.
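The same dependency can be seen by treating the function as code: the output changes only when its inputs change (a trivial Python sketch):

```python
def y(x, z):
    # Y = 3X + Z - 5: Y (the return value) depends on X and Z
    return 3 * x + z - 5

print(y(2, 4))   # 3*2 + 4 - 5 = 5
print(y(2, 10))  # changing Z changes Y: 3*2 + 10 - 5 = 11
```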