Top Notch Consulting for PhD Research and Journal Manuscript Publications

Category: Data Analysis

Why Pilot Questionnaires? Reliability and Validity Testing for PhD Research

There are two key tests for a questionnaire: reliability and validity. A questionnaire is reliable if it produces a consistent distribution of responses from the same survey universe. It is valid if it measures what we want it to measure.

Testing a questionnaire directly for reliability is very difficult. It can be administered twice to the same set of test respondents to determine whether or not they give consistent answers. However, the time between the two interviews cannot usually be very long, both because the respondents’ answers may in fact change over time and because, to be of value to the researcher, the results are usually required fairly quickly. The short period causes further problems in that respondents may have learnt from the first interview and, as a result, may alter their responses in the second one. Conversely, they may realize that they are being asked the same questions and deliberately try to be consistent with their answers. In testing for reliability, we are therefore often asking whether respondents understand the questions and can answer them meaningfully.
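
One way to summarize a test-retest pilot is to compare the two administrations statistically. The following is a minimal sketch in Python, assuming two administrations of the same Likert-type item to the same respondents; the column names and figures are purely illustrative.

```python
# Sketch: checking test-retest agreement on pilot data.
# Assumes the same 1-5 scale item was asked twice of the same
# respondents; column names and values are illustrative only.
import pandas as pd
from scipy.stats import pearsonr

pilot = pd.DataFrame({
    "respondent": [1, 2, 3, 4, 5, 6],
    "q10_wave1":  [4, 5, 3, 2, 5, 4],   # first administration
    "q10_wave2":  [4, 4, 3, 2, 5, 5],   # second administration, two weeks later
})

# Test-retest correlation: higher values suggest more consistent responses.
r, p = pearsonr(pilot["q10_wave1"], pilot["q10_wave2"])
print(f"Test-retest correlation for Q10: r = {r:.2f} (p = {p:.3f})")

# Simple agreement rate: share of respondents giving the identical answer twice.
agreement = (pilot["q10_wave1"] == pilot["q10_wave2"]).mean()
print(f"Exact agreement: {agreement:.0%}")
```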

Testing a questionnaire for validity requires that we ask whether the questions posed adequately address the objectives of the study. This should include whether or not the manner in which answers are recorded is appropriate.

In addition, questionnaires should be tested to ensure that there are no errors in them. With the time scales for producing questionnaires sometimes very tight, there is very often a real danger of errors creeping in.

Piloting questionnaires can thus be divided into three areas: reliability, validity, and error testing.

Reliability

  • Do the questions sound right? It is surprising how often a question looks acceptable when written on a piece of paper but sounds false, stilted or simply silly when read out. It can be a salutary experience for questionnaire writers to conduct the interview themselves; they should note how often they want to paraphrase a question that they have written so that it sounds more natural.
  • Do the interviewers understand the questions? Complicated wording can make a question incomprehensible even to the interviewers. If they cannot understand it, there is little chance that respondents will.
  • Do the respondents understand the questions? It is easy for technical terminology and jargon to creep into questions, so we need to ensure that it is eliminated.
  • Have we included any ambiguous, double-barrelled, loaded or leading questions?
  • Does the interview retain the attention and interest of respondents throughout? If attention is lost or interest wavers, then the quality of the data may be in doubt. Changes may be required in order to retain the respondent’s interest.
  • Can the interviewers or respondents understand the routing instructions in the questionnaire? Particularly with paper questionnaires, we should check that the routing instructions can be understood by the interviewers or, in the case of self-completion, by respondents.
  • Does the interview flow properly? The interview should feel like a conversation with the respondent. A questionnaire that unfolds in a logical sequence, with a minimum of jumps between apparently unrelated topics, helps to achieve that.

Validity

  • Can respondents answer the questions? We must ensure that we ask only questions that respondents are capable of answering.
  • Are the response codes provided sufficient? Missing response codes can lead to answers being forced to fit the codes provided, or to large numbers of ‘other’ answers.
  • Do the response codes provide sufficient discrimination? If most respondents give the same answer, the pre-codes provided may need to be reviewed to see how discrimination can be improved; if that cannot be achieved, the value of including the question should be queried. One way to check this on pilot data is sketched after this list.
  • Do the questions and responses answer the brief? By this time we should be reasonably certain that the questions we think we are asking meet the brief, but we need to ensure that the answers which respondents give to those questions are indeed responses to the questions that we think we are asking.
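
To check discrimination and the use of ‘other’ codes, pilot responses can be tabulated question by question. Below is a minimal sketch in Python; the question names, answer codes and the 80%/20% thresholds are assumptions for illustration, not fixed rules.

```python
# Sketch: screening pilot responses for poor discrimination and
# overloaded 'other' codes. Thresholds are illustrative assumptions.
import pandas as pd

pilot = pd.DataFrame({
    "q5": ["agree", "agree", "agree", "agree", "neutral", "agree"],
    "q6": ["bus", "car", "other", "other", "other", "walk"],
})

for question in pilot.columns:
    shares = pilot[question].value_counts(normalize=True)
    top_answer, top_share = shares.index[0], shares.iloc[0]
    other_share = shares.get("other", 0.0)

    if top_share > 0.8:
        print(f"{question}: {top_share:.0%} gave '{top_answer}' - little "
              "discrimination; review the pre-codes or the question itself.")
    if other_share > 0.2:
        print(f"{question}: {other_share:.0%} coded as 'other' - the "
              "response list may be missing common answers.")
```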

Error Testing

  • Have mistakes been made? Despite all the procedures that most research companies have in place to check questionnaires before they go live, mistakes do occasionally still get through. It is often the small mistakes that go unnoticed, but these may have a dramatic effect on the meaning of a question or on the routing between questions. Imagine the effect of inadvertently omitting the word ‘not’ from a question.
  • Does the routing work? Although this should have been comprehensively checked, illogical routing sequences sometimes only become apparent with live interviews; a simple check against pilot records is sketched after this list.
  • Does the technology work? If unusual or untried technology is being used, perhaps as an interactive element or for displaying prompts, this should be checked in the field. It may work perfectly well in the office, but field conditions are sometimes different, and a hiatus in the interview caused by slow or malfunctioning technology can lose respondents.
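
Where pilot interviews are captured electronically, routing rules can also be re-checked against the completed records. The sketch below assumes, purely for illustration, a rule that Q8 should only be answered by respondents who said ‘yes’ at Q7.

```python
# Sketch: re-checking an assumed routing rule (Q8 only if Q7 == "yes")
# against pilot records. Question names and data are hypothetical.
import pandas as pd

pilot = pd.DataFrame({
    "respondent": [1, 2, 3, 4],
    "q7": ["yes", "no", "yes", "no"],
    "q8": ["weekly", None, "monthly", "daily"],   # respondent 4 should have skipped Q8
})

violations = pilot[(pilot["q7"] != "yes") & pilot["q8"].notna()]
if violations.empty:
    print("Routing from Q7 to Q8 behaved as intended in the pilot.")
else:
    print("Routing violations found:")
    print(violations[["respondent", "q7", "q8"]])
```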

How long does the interview take? Most surveys will be budgeted for the interview to take a certain length of time. The number of interviewers allocated to the project will be calculated partly on the length of the interview, and they will be paid accordingly. Assumptions will also have been made about respondent cooperation based on the time taken to complete the interview. The study can run into serious timing and budgetary difficulties, and may be impossible to complete, if the interview is longer than the time allowed for. Being shorter than allowed for does not usually present such problems, but may lead to wasteful use of interviewer resources. A sketch of how pilot interview durations might be compared with the budgeted length follows below.
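
A simple way to monitor this during the pilot is to compare recorded interview durations with the budgeted length. The durations and the 20-minute budget below are illustrative assumptions only.

```python
# Sketch: comparing pilot interview durations with the budgeted length.
# All figures are illustrative.
durations_minutes = [18.5, 22.0, 25.5, 19.0, 27.0, 21.5]
budgeted_minutes = 20.0

average = sum(durations_minutes) / len(durations_minutes)
over_budget = sum(1 for d in durations_minutes if d > budgeted_minutes)

print(f"Average pilot interview: {average:.1f} min (budget: {budgeted_minutes:.0f} min)")
print(f"{over_budget} of {len(durations_minutes)} pilot interviews ran over the budgeted length")
if average > budgeted_minutes:
    print("Consider shortening the questionnaire or revising the fieldwork budget.")
```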