Concept of Reliability
Pause! Let me ask you a question: how reliable was the last questionnaire you distributed, or how reliable were the various items/questions measuring their respective constructs? Perhaps you have yet to distribute your questionnaires. Once the respondents have carefully filled them out, how reliable will their responses be? Will the errors in their responses be low or high? Are your measurements consistent?
The answers to these questions are not far-fetched. The reliability of the items in your questionnaire, or of the items measuring a particular construct, is the degree to which that set of items produces consistent results for what it is supposed to measure. In other words, we want to check the internal consistency of those measurement scales.
For instance, in our dataset there are five items measuring the perceived quality of information in Wikipedia. If these items (Qu1 – Qu5) are reliable, we expect them to produce consistent results. Any unreliable item is removed from the analysis and hence not considered in subsequent analyses. As a researcher, it is worth noting that a high level of reliability is an indication of a good relationship between a construct and the items measuring it. That being said, a stronger relationship implies that the construct explains a larger portion of the variance in each item/question. It also means that your measurement error is very low, whether for the whole questionnaire or for each construct.
Cronbach's alpha (α) is the most common measure of reliability. It ranges from 0 to 1, with values closer to 1 indicating greater internal consistency. The commonly accepted threshold is 0.70.
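As a rough illustration, Cronbach's alpha can be computed directly from a respondent-by-item score matrix using the standard formula α = (k / (k − 1)) · (1 − Σ item variances / variance of total score), where k is the number of items. The sketch below (in Python with NumPy) uses the five items Qu1 – Qu5 from the example above, but the Likert-scale responses themselves are hypothetical, invented purely for demonstration:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: rows = respondents, columns = Qu1-Qu5
scores = np.array([
    [4, 5, 4, 4, 5],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 3, 2, 2, 3],
    [4, 4, 5, 4, 4],
    [3, 2, 3, 3, 2],
])

print("alpha:", round(cronbach_alpha(scores), 3))

# "Alpha if item deleted": recompute alpha with each item left out in turn,
# a common diagnostic for spotting an item worth removing from the scale.
for i, name in enumerate(["Qu1", "Qu2", "Qu3", "Qu4", "Qu5"]):
    reduced = np.delete(scores, i, axis=1)
    print(name, "deleted ->", round(cronbach_alpha(reduced), 3))
```

If the overall alpha clears the 0.70 threshold and deleting no single item raises it noticeably, the scale can be treated as internally consistent; an item whose deletion substantially increases alpha is a candidate for removal.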