Gaining market intelligence is a fundamental issue in developing managerial competence. Lietz (2010) asserts that market intelligence allows individuals at different levels of management to make effective decisions. One of the mechanisms of gaining market intelligence includes conducting market research in order to gather relevant market data. However, the reliability of the data collected is subject to the researchers’ competence in conducting the research. Therefore, it is vital for researchers to incorporate an effective research design, which entails the procedures that guide the researcher whilst conducting a particular study.
Some of the key components of the research design include data collection procedures, data analysis, reporting, and interpretation of the research studies. Dornyei (2008, p. 53) emphasises that research designs ‘are useful because they help in guiding the methodological decisions that researchers must make during their studies and set the logic by which they make interpretations at the end of their studies’.
During the research process, researchers should be concerned with improving the quality of their study, which influences its significance to the targeted stakeholders such as the government, non-governmental organisations, the public, and the business community. This goal can be achieved by ensuring that the data collected is credible and relevant. Subsequently, researchers should incorporate effective methods of data collection. Data collection entails the process of gathering data from the field. The two main sources of data that researchers can adopt in conducting their study include primary and secondary sources. The primary sources entail sourcing data from the natural setting while secondary sources involve collecting data from documented sources such as published reports, articles, and previous studies.
Different methods of data collection have been formulated in an effort to assist researchers in gathering the relevant data. Therefore, researchers can choose different data collection methods for disparate studies depending on different aspects. In addition, the method of data collection varies depending on whether the study being conducted is qualitative or quantitative. Some of the data collection methods used in gathering primary data include interviewing, observation, and questionnaires. One of the most effective data collection instruments used in collecting primary data is questionnaires, which entail a set of questions that are designed in line with the research objectives. Lietz (2010, p. 258) defines a questionnaire as ‘a document containing questions and other types of items designed to solicit information appropriate to analysis’.
The significance of questionnaires in collecting data from the field emanates from the view that they can be integrated into collecting data using various research techniques such as surveys and observation. Lietz (2010) argues that a survey entails a complex communication process, which is a product of interaction between the research participants and the researcher. Additionally, the effectiveness of the survey is influenced by the extent of communication and sharing amongst the research participants in order to create meaning. However, the effectiveness of questionnaires as a data collection instrument is dependent on how well they have been designed. This paper entails a critical analysis of how to incorporate questionnaire design and question types in order to improve the quality of a research study.
Questionnaire reliability and validity
Lewis and Slack (2007) define reliability as the degree to which observations, tests, questionnaires, and other measurement procedures lead to the generation of similar results on repeated trials. Alternatively, reliability refers to the extent to which a psychometric instrument is free from errors irrespective of the prevailing conditions or environment (test-retest reliability). Questionnaires are measurement instruments and hence they must be reliable. The two main forms of reliability involved in conducting a study are reliability within a scale and test-retest reliability. On the other hand, Dornyei (2008, p. 110) defines validity as ‘the extent to which a psychometric instrument measures what it has been designed to measure’.
Researchers can consider various types of validity in designing the questionnaires. Some of the most common forms of validity include construct validity, face validity, content validity, and criterion-related validity. Construct validity refers to the extent to which the data collection instrument assesses the intended research construct. Content validity is concerned with whether the instrument takes into account all the most important aspects of the study. Therefore, it is important for the researcher to ensure a lucid definition of the constructs. On the other hand, face validity is concerned with whether the research questions appear to measure the construct. In order to ensure the validity and reliability of questionnaires, the researcher should take into account a number of steps as evaluated below.
Understanding the research background
The first step entails examining the formulated research purpose, research objective, research questions, and hypothesis. The examination is aimed at determining the target research audience and their educational or readability background. Moreover, the sample selected and the study population are evaluated. The researcher should also develop a comprehensive understanding of formulated research problems by conducting an intensive literature review.
Questionnaire conceptualisation
This step entails generating the questions or statements to be used in the questionnaire. The questionnaires should be based on the literature and the theoretical framework adopted. Questionnaire conceptualisation also entails establishing a link with the predetermined research objectives. At this stage, the researcher must establish the elements that the questionnaire intends to measure, for example, opinions, attitudes, knowledge, behaviour change, and perceptions. Subsequently, the researcher will be in a position to identify and define the major variables, which include the moderator, independent, and dependent variables.
Format and data analysis
After conceptualisation of the questionnaires, the researcher should focus on writing the research questions, selecting the most suitable scales of measurement, and determining the questionnaire layout, order of questions, and format to be used. Additionally, it is paramount for the researcher to consider the proposed data analysis. Furthermore, the researcher should establish the link between the selected scale of measurement and the suitability of the data analysis. For example, if the researcher intends to examine the degree of variation amongst the various research variables using the ANOVA technique at the data analysis stage, he or she must ensure that the independent variable is measured on a nominal scale with at least two levels, while the dependent variable is measured on a ratio or interval scale.
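To illustrate this link between measurement scale and analysis, the F statistic for a one-way ANOVA can be computed directly from groups defined by the levels of a nominal factor, with the dependent variable measured on an interval scale. The sketch below is illustrative only; the function name and the sample data are hypothetical, not taken from any cited study.

```python
def one_way_anova_f(groups):
    """Compute the one-way ANOVA F statistic.

    groups: one list of interval-scale measurements per level of the
    nominal independent variable (at least two levels are required).
    """
    k = len(groups)                      # number of factor levels
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: variation of group means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: variation of observations around their group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Two levels of a nominal factor, interval-scale scores (hypothetical data)
f_stat = one_way_anova_f([[1, 2, 3], [4, 5, 6]])
```

A large F relative to its critical value would indicate that the group means differ more than within-group variation alone would suggest.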
Establishing validity and reliability
At this stage, the researcher will have already formulated a draft questionnaire. Habib and Magalhaes (2007) are of the opinion that validity is concerned with the built-in or systematic error in a particular measurement. In order to establish validity, it is imperative for the researcher to involve a panel of experts, which should be comprised of professionals and researchers in the selected area of study. Moreover, a field test should be conducted using subjects that will not be considered in the actual sample. The objective of the field test is to identify errors and make the necessary adjustments based on the findings of the field test and expert opinion.
Researchers should assess a number of questions in order to ensure that validity is implemented effectively. Some of these questions entail establishing whether the questionnaire integrates the intended measure and whether it represents the content. Furthermore, the researcher should assess whether the study has taken into account the right research population and sample. Habib and Magalhaes (2007, p. 1) contend that the researcher ‘should determine whether the questionnaire is comprehensive enough to collect all the information needed to address the purpose and goals of the study’.
Addressing the above issues in addition to conducting readability tests increases the validity of research questionnaires. A readability test is an indicator that is used in evaluating the ease with which a particular document can be read and understood by the target audience (Kouame 2010). The readability tests are used in the process of establishing the level of language difficulty. After establishing validity, the researcher can use the research questionnaire in conducting a pilot test. Kouame (2010) emphasises that the researcher should not only establish the validity of the questionnaire, but also its reliability. The reliability of a questionnaire is subject to the level of validity. The reliability of a questionnaire can be determined by conducting a pilot test.
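As an illustration, the Flesch Reading Ease score is one widely used readability metric. The sketch below approximates it with a rough vowel-group syllable heuristic; the heuristic and function names are simplifying assumptions, not a standard implementation.

```python
import re

def count_syllables(word):
    # Rough heuristic: count runs of vowels, discounting a silent final 'e'.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_reading_ease(text):
    # Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / max(len(sentences), 1))
            - 84.6 * (syllables / max(len(words), 1)))
```

Higher scores indicate easier text, so a questionnaire aimed at a general audience would be expected to score well above one aimed at specialists.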
Lewis and Slack (2007) assert that pilot testing is an integral element in the construction of a questionnaire. This assertion arises from the view that it provides researchers with an important insight into the ease or difficulty associated with completing the research questionnaire. The quality of a research study may be affected adversely if the questionnaire used is developed poorly or contains significant errors. Thus, pilot testing enables the researcher to identify any unclear concepts. Lewis and Slack (2007) define pilot testing as the process of administering preliminary questionnaires to a selected group of typical respondents.
During the pilot testing, the researcher is not obligated to select the pilot testing participants randomly. Pilot testing is critical in making the research questionnaires user-friendly. Through pilot testing, the researcher is in a position to eliminate issues that might affect the reliability of the study. An example of such issues includes bias. Researchers can employ different types of pilot testing. Some of the most common methods of pilot testing are explained below.
According to organisation
Pilot tests can be conducted based on the nature of the organisation, which includes internal and external organisations. Kouame (2010) asserts that external pilot testing involves administering the formulated research questionnaire to a number of respondents selected from the field. These respondents are not considered in the actual study. On the other hand, Lewis and Slack (2007, p. 23) define internal pilot testing as ‘an internal pilot survey that considers the respondents in the pilot study as the first participants in the main study’.
Depending on respondent participation
Lewis and Slack (2007, p.25) indicate that pilot surveys ‘can be categorised depending on the extent of participation, which can be either participatory or undeclared, where undeclared pilot survey entails administering the research questionnaires to a few respondents as if it were the real survey’. On the other hand, participatory pilot testing entails obtaining informed consent from a number of respondents, for example, 10% of the selected sample, by informing them of their inclusion in the pre-test phase. This type of pilot survey is used in gathering suggestions on how the research questions can be improved. For example, the researcher may ask the selected respondents how easy or difficult it is to answer the questions.
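The 10% pre-test sample mentioned above could be drawn at random. The following is a hypothetical sketch (the function name, the fixed seed, and the data are assumptions), and informed consent would still be sought from each drawn respondent.

```python
import random

def draw_pilot_sample(respondents, fraction=0.10, seed=42):
    # Draw roughly the stated fraction of the sample for a participatory pilot;
    # the fixed seed makes the draw reproducible for audit purposes.
    k = max(1, round(len(respondents) * fraction))
    return random.Random(seed).sample(respondents, k)

pilot_group = draw_pilot_sample([f"respondent_{i}" for i in range(100)])
```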
Methods of pilot testing
Researchers can adopt different methods in conducting the pilot test. One of these methods is the test-retest method, which entails testing the selected sample twice. The test-retest method enables the researcher to establish the reliability of a study if the selected respondents obtain the same scores at different times. Subsequently, the test-retest method should culminate in a high degree of correlation between the answers provided. However, one of the greatest limitations of this method is that its usability might be affected if the study focuses on evaluating aspects that are subject to change, for example, assessing the level of depression and anxiety amongst medical students.
The limitation of the test-retest method can be mitigated using other methods such as the alternate-form and the split-half methods. The alternate-form method entails developing and comparing two equivalent questionnaires. On the other hand, Lewis and Slack (2007, p. 36) add that the ‘split-half method is a random statistical technique of testing reliability’. The method is applied by organising the various research items into two main groups. Lewis and Slack (2007, p. 38) assert that a score ‘for each subject is then calculated based on each half of the scale and if a scale is very reliable we would expect a person’s score to be the same on one half of the scale as the other’. The data collected can be analysed using various data analysis software such as the Statistical Package for the Social Sciences (SPSS).
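The split-half procedure described here can be sketched as follows, using an odd/even item split and the Spearman-Brown correction (a standard companion step not named in the text above; the function names and data are hypothetical):

```python
def pearson(x, y):
    # Pearson correlation coefficient, computed from first principles
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

def split_half_reliability(responses):
    """Split items into odd/even halves, score each half per subject,
    correlate the half-scores, and apply the Spearman-Brown correction."""
    odd_scores = [sum(items[0::2]) for items in responses]
    even_scores = [sum(items[1::2]) for items in responses]
    r = pearson(odd_scores, even_scores)
    return (2 * r) / (1 + r)  # Spearman-Brown: reliability of the full scale
```

A value near 1 would suggest that a respondent's score on one half of the scale closely matches the other, as the quoted definition requires.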
Types of questionnaires
Questionnaires form the foundation of a particular survey. Therefore, they have been used extensively over the past years in conducting research studies, especially in the health care sector. The relevance of questionnaires arises from their ability to enable researchers to establish the relationship between the prevailing theoretical issues and the predominant environment. Researchers can adopt different types of questionnaires. Rattray and Jones (2005, p. 240) say that some of the most common questionnaires include ‘structured, unstructured, quasi-structured, where structured questionnaires refer to pre-coded and definite questions, which are designed in advance by the researcher’. These questionnaires are also referred to as close-ended questionnaires and they are used in initiating a formal inquiry.
Therefore, they are characterised by a pre-defined answer. Dawes (2008, p. 136) argues that the ‘researcher has to anticipate all possible answers with pre-coded responses’. The structured questionnaires are mainly used in gathering quantitative data. Rattray and Jones (2005, p. 143) posit that one of ‘the advantages of structured questionnaires is that they result in minimal discrepancies and they are easy to administer to all the research respondents irrespective of their level of education’. Moreover, structured questionnaires increase the consistency with which the researcher conducts the study and simplify the coding of the data collected.
The unstructured questionnaires are also known as open-ended questionnaires. These questionnaires provide the research moderator with an opportunity to elaborate on the questions asked so that the respondent can make sense of them. Subsequently, one can argue that unstructured questionnaires entail a guided conversation. Alternatively, Rattray and Jones (2005) contend that unstructured questionnaires can be defined as a topic guide. Therefore, the researcher ensures that the questionnaires are not rigid, which provides the enumerator with the discretion to construct new questions during the interviewing process in order to gather sufficient data. The unstructured questionnaires are mainly used in collecting data during focus group discussions.
Using open-ended questions increases the probability of gathering voluminous data, as the researcher has an opportunity to ask additional questions during the interviewing process. Consequently, unstructured questionnaires enrich a particular research study. However, one of the major limitations of open-ended questionnaires is that they may discourage responses from less literate respondents. Moreover, the data collected using an open-ended questionnaire can be complex to analyse due to difficulties in interpreting the responses.
The quasi-structured questionnaires are also referred to as semi-structured questionnaires as they are comprised of both open-ended and close-ended questions. Habib and Magalhaes (2007) assert that quasi-structured questionnaires are mainly used in conducting business-to-business market research, whereby the researcher might need to gain different responses. Moreover, they enable the researcher to gather both qualitative and quantitative market data.
Researchers should ensure that the questionnaires are designed effectively, which can be achieved through effective question encoding in order to enhance the communication process. Mortel (2008) argues that question encoding enables the researcher to fine-tune the research questions so that the respondents understand the language used irrespective of their level of education, gender, occupation, and age. Researchers should take into account a number of best practices to ensure that the questions are designed optimally. Some of these issues are evaluated herein.
Distinct question objectives
Every question included in the questionnaire should have a distinct objective. The objective of the question asked to the respondents should align with the objectives of the research study.
Avoid combining questions
Researchers should ensure that only one question is asked at a time. Thus, the merging of questions should be avoided. The selected respondents may not fully answer the merged questions. Below is an example of a merged question.
When did you get married and how many days did your honeymoon last?
Avoiding calculations
In an effort to enhance the rate of participation in the research study, it is vital for researchers to avoid asking questions that require the respondents to calculate. Mortel (2008, p. 111) warns that most respondents ‘hesitate to calculate while those who cannot do so provide wrong answers as a way of hiding their ignorance; furthermore, the respondents who have the capacity to calculate may provide wrong answers in their quest to prove their level of confidence with regard to calculations’.
Question length
Previous studies on questionnaire design emphasise the importance of keeping the questions in the questionnaires as short as possible. The response rate is related inversely to the length of the questions. Mortel (2008) is of the view that long questions tend to be complex, which increases the amount of time required to provide a response, thus minimising the possibility of receiving complete responses. Therefore, Mortel (2008) contends that the questions should be designed in such a way that they are of medium length at most.
Grammatical simplicity
As one of the most important data collection instruments, questionnaires should be designed so that the target respondents understand the questions asked. Subsequently, it is fundamental for the researcher to minimise grammatical complexity. Moreover, the researcher should ensure that the questions adopt the active voice and repeat nouns rather than relying on pronouns. Possessive forms should also be avoided in the question design process (Jankowicz 2011).
Eliminating socially desirable responses
In the quest to improve the rate of response, it is imperative for the researcher to avoid poorly worded questions. Mortel (2008, p. 116) insists that using ‘difficult vocabularies can threaten the respondents’ rate of response’. Some respondents may feel humiliated and uneducated. Consequently, the respondents may end up providing answers that may reduce the relevance of the study such as ‘I do not know’.
Moreover, Mortel (2008) asserts that the wording used in the questionnaire may force the respondents to answer the questions in a particular direction that seems to be socially accepted. The threat of social desirability occurs if the research study is inclined towards socially sensitive questions. Socially desirable responses may also arise if the respondents fear providing answers that might identify or associate them with their personal life issues such as sexuality. Such questions may be regarded as embarrassing and should be avoided. Furthermore, the respondents may be forced to provide socially desirable answers because of their prestige or social status (Jankowicz 2011).
Social desirability has a negative impact on the cogency of the research questionnaire due to the underlying preconceptions. Additionally, socially desirable answers reduce the accuracy of the data collected from the field because the respondents are not in a position to answer the questions truthfully (Mortel 2008).
Lietz (2010) argues that researchers have a responsibility to eliminate socially desirable responses in their question design process. One of the techniques that researchers can adopt entails indirect questioning. This technique is based on the assumption that the respondents will reveal their own views on the issue under investigation while believing that they are reporting the opinions of other individuals in society. An example of an indirect question is
- What is your opinion regarding individuals indulging in prostitution?
The second strategy that researchers can adopt in their quest to eliminate socially desirable responses is to introduce the research question with a face-saving phrase such as ‘Do you know…’. This type of questioning enables the respondents to think about the questions asked and provide a response based on their knowledge. Mortel (2008) is of the opinion that this type of questioning provides the respondents with discretion in responding to the question. Subsequently, they have the right to provide a ‘do not know’ response. Furthermore, socially desirable answers can be eliminated by ensuring that the questions asked are neutral.
A number of tools have been formulated in an effort to assist researchers to assess the extent of social desirability. Some of these instruments include the Balanced Inventory of Desirable Responding [BIDR] and the Marlowe-Crowne Social Desirability Scale (Mortel 2008). However, these measures have not been used extensively, which underscores the importance of using effective wording in eliminating socially desirable responses.
Negatively worded questions
Researchers should avoid questions that are constructed negatively. This assertion arises from the view that such questions increase the amount of time that the respondent requires to process them. Furthermore, the likelihood of respondents making mistakes in negatively worded questions is high. Lietz (2010) emphasises that negatively worded questions increase the level of confusion amongst respondents.
Specificity and simplicity
Questionnaires should be designed to minimise the respondents’ cognitive demands. This goal can be attained by integrating the elements of specificity and simplicity by breaking down the complex questions into simpler questions. Specificity can be achieved by providing illustrations on some of the complex issues. Furthermore, the researcher should ensure that the questionnaires are not indistinct by avoiding vague words such as ‘perhaps’, ‘probably’, and ‘maybe’.
Considering the view that the relevance of a study is affected by the rate of response, it is paramount for researchers to ensure that the questions asked are not ambiguous. Ambiguous questions include double-barrelled questions, which combine two different issues in a single question. Lietz (2010) asserts that ambiguity occurs if the question asked contains two different concepts. For example, ambiguity may occur if a question contains two verbs. Bhandari and Wagner (2006, p. 32) are of the view that invalidity of ‘responses due to cognitive overload increases where recalls of events are involved that have not occurred in the immediate past’. In such events, the respondents’ response is influenced by the significance of the event under consideration.
Adverbs of frequency
The clear wording of the research questions is fundamental in increasing the rate of response. Thus, it is vital for researchers to incorporate adverbs of frequency. However, the researcher should ensure that an effective numeric reference is adopted in a bid to increase the specificity of the research questions. Examples of adverbs of frequency include ‘more than’ and ‘less than’. Moreover, effective response categories should be incorporated in order to ensure that the metric used is effective.
Avoiding leading questions
In order to obtain substantial and disparate answers from the field, it is imperative for the researcher to avoid questions that push the participants to answer in a particular direction.
No shorter checklists
Researchers have the discretion to use open-ended or close-ended questions during the data collection process. However, the choice of responses [the response set] provided by the researcher should not be limited; instead, it should include an extensive list of options for the respondent to select from.
Question categories and structure
Researchers can incorporate two main categories of questions in their study. These categories include general and specific questions. The general questions are aimed at understanding the respondents’ general opinions on certain issues. On the other hand, the specific questions are aimed at eliciting a certain response. In their quest to increase the rate of response and to gather sufficient market data, it is imperative for researchers to ask general questions before the specific questions. However, Lietz (2010, p. 255) holds that the ‘most important aspects of the questionnaire should be included in the first half of the general questions’.
One of the major challenges faced by researchers when collecting data using questionnaires arises from the failure of the selected respondents to complete the questionnaires. Putting the most essential aspects in the general questions section will enhance the probability of gathering a substantial amount of data from non-finishers.
The questionnaires should start with the general questions before finishing with the specific ones. Lietz (2010, p. 256) asserts that specific questions ‘take a certain aspect out of the responses obtained from the general responses’. Researchers should ensure that the questionnaires move from the factual questions to the abstract questions. Moreover, the closed questions should be asked prior to the open questions.
In addition to the above issues, questions regarding the respondents’ demographics, for example, the respondents’ personal information such as age, level of education, occupation, marital status, and income should be positioned at the end of the questionnaire to minimise the development of negative feelings amongst the respondents. The personal information asked might influence the respondents’ participation in the research process. Some respondents may consider some of the personal information asked as an infringement of the right to confidentiality. Lietz (2010) emphasises that insisting on the respondents’ personal information makes a study coercive.
The objective of designing questionnaires is to gather data from the target population. Subsequently, it is imperative for the researcher to provide adequate space for the respondent to provide answers. The questionnaire should include clear headings and the numbering of the research questions. Furthermore, the questionnaires should be legible, which can be achieved by using a minimum font size of 11 points.
Response and measurement scale
During the questionnaire designing process, it is vital for researchers to decide how they intend the selected respondents to answer the questions.
Various response options can be adopted. Some of these options include
- The ‘do not know’ option
- Opinion floating
- Opinion filtering
Researchers should make a decision on whether all the potential respondents, irrespective of their knowledge of the issue under investigation, should be considered in the study. The researchers should determine whether to filter the less informed respondents. The response scale might include the ‘do not know’ option depending on how the researchers intend to select the research respondents. For example, if the study entails volunteering, the researcher might be forced to incorporate the ‘do not know’ option during the initial stage. However, this option may be eliminated after the pilot testing stage. Less educated respondents tend to provide the ‘do not know’ (DK) response more often than relatively educated respondents.
The second response option entails opinion floating, whereby the researcher provides the respondents with a list of answers to choose from. On the other hand, opinion filtering entails asking questions to the respondents to determine their level of knowledge. An example of a filtering question is ‘What is your opinion on climate change?’ Asking such questions increases the likelihood of filtering out potential respondents who are less knowledgeable about the issue under investigation. However, one of the major limitations of opinion filtering is that it affects the extent to which the selected research sample is representative of the general population.
One of the widely used rules in conducting research studies is that the sample must be representative. The filtering option also raises the issue of the respondents’ attitudes. Findings of previous studies on respondents’ attitudes show that respondents rely on their general attitude when evaluating questions involving unfamiliar content (Lietz 2010).
According to Hill, Brierley, and McDougall (2010), determining the rating or measurement scale is a fundamental element in the questionnaire development process. The objective of scaling is to quantify the responses. Scaling enables researchers to adopt the mixed research design, which entails a combination of qualitative and quantitative research designs. Moreover, scaling enables researchers to analyse the responses obtained from the open-ended questionnaires. Lietz (2010, p. 257) contends that scaling ‘is the arrangement of possible opinions of respondents in a coherent order of behaviour or attitudes, in which a person could judge him or her to be fit in certain standpoint’. Scaling can be categorised into three types, viz. Guttman scaling, Thurstone scaling, and Likert scaling.
Thurstone scaling involves arranging the responses received into different categories based on a particular variable such as the level of income. Guttman scaling entails cumulative scaling by establishing the relationship amongst the various opinions. On the other hand, Lietz (2010, p. 258) notes that the Likert scale, which involves a ‘psychometric scale that is constructed based on the questionnaires, is one of the most commonly used scales of measurement and it is summative in nature as it involves arranging the responses obtained from the extreme negative option to the extreme positive option’. An example of the Likert scale is given below.
- 5 = fully agree
- 4 = somewhat agree
- 3 = neither agree nor disagree
- 2 = somewhat disagree
- 1 = fully disagree
Different point scales can be used in developing the Likert scale. Some of the common scales include the 5-point and 7-point scales. Dawes (2008) argues that the two scales can be used to enable the researcher to undertake comparisons. However, the 7-point scale is considered more reliable due to its capacity to provide the respondents with an opportunity to differentiate the responses compared to the 5-point scale. The choice of the measurement scale should be based on the nature of the study. Thus, short measurement scales, for example, the 5-point scale, are mainly used in situations requiring absolute judgements from the respondents. On the other hand, longer scales such as the 7-point and 11-point scales are used in situations requiring the respondents to make relative judgements.
In addition to the above issues, it is imperative for the researcher to ensure that respondents can translate their attitudes easily onto the rating scale. Krosnick and Presser (2010) assert that the length of the response and measurement scale used affects the effectiveness with which the selected respondents map their attitudes onto the provided response alternatives. If a respondent holds an extremely negative or positive attitude towards a particular aspect, using a dichotomous scale such as ‘strongly agree’ versus ‘strongly disagree’ might lead to inaccurate reporting of the respondent’s attitude. Such a dichotomous scale lacks a middle ground, which limits its relevance to a neutral respondent.
Thus, its accuracy and reliability are affected adversely. On the other hand, a trichotomous scale, which comprises the ‘like’, ‘neutral’, and ‘dislike’ options, may be biased towards respondents with moderate attitudes. Therefore, it is imperative for the response scale used to take into account the diverse attitudes of the respondents towards the research subject. The response scale should also be clear to all the respondents in order to enhance the validity and reliability of the research measurements (Krosnick & Presser 2010).
Impact of order and direction of the Likert scale
Numerous studies have been conducted in an effort to determine the impact of response order, such as the primacy and recency effects, and the impact of changing the frames of reference in a study. Lietz (2010) defines the primacy effect as the tendency of the selected research participants to incline towards the earlier alternatives provided in the Likert scale as opposed to the later alternatives.
Conversely, the recency effect is assumed to occur when the respondents incline towards the later alternatives, particularly after hearing the alternatives read aloud. Lietz (2010) asserts that a shift in the set frames of reference occurs if the respondents select the ‘most favourable’ option regardless of whether it appears earlier or later in the Likert scale. Findings of previous studies show that the recency effect occurs in a number of studies involving unusual topics, as well as in some studies involving long-winded questions. On the other hand, the primacy effect occurs if a question incorporates numerous alternatives, for example, 16 alternatives.
Another issue that has been evaluated previously relates to the direction of the various response options in the Likert scale. The point of contention has been the impact of the position of the various alternatives such as the ‘strongly agree’ option and the ‘strongly disagree’ option on the respondents’ behaviour. A number of studies have documented arguments on whether the ‘strongly agree’ option should be positioned on the left-hand side of the Likert scale and the ‘strongly disagree’ alternative on the right-hand side.
Despite the above areas of contention, findings of a study conducted by Dawes (2008) show that the direction of the various response options does not have a significant impact on the study as long as the alternatives are assigned the corresponding numerical weights. For example, the ‘strongly agree’ response should carry a higher weight, such as 8 points, as opposed to the ‘strongly disagree’ option, which can be assigned a value of 1 point. However, some critics are of the view that the options communicating less socially desirable responses should be positioned on the left-hand side in order to minimise the likelihood of respondents making a choice without assessing the various options provided in the Likert scale.
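Assigning corresponding numerical weights regardless of presentation direction amounts to what is commonly called reverse-coding: if a questionnaire presents the scale with ‘strongly disagree’ first, the recorded codes can be flipped so that ‘strongly agree’ always carries the highest weight. A minimal sketch follows; the 8-point weighting mirrors the example above, and the function name is illustrative.

```python
def reverse_code(value, k=8):
    """Flip a response coded on a 1..k scale so that the direction of
    presentation does not change its weight (1 becomes k, k becomes 1)."""
    return k + 1 - value

# If 'strongly agree' was recorded as 1 on a reversed 8-point scale,
# it maps back to the intended highest weight of 8:
print(reverse_code(1))  # prints 8
print(reverse_code(8))  # prints 1
```

Applying this consistently means that totals computed from differently ordered versions of the same questionnaire remain directly comparable.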
Questionnaires constitute an important component in the process of conducting a research survey. Their significance emanates from their ability to assist researchers in collecting data from the field, upon which the research findings are based. In a bid to improve the quality and relevance of the research findings, the questionnaires used must be reliable and valid. The concepts of validity and reliability are determined by the effectiveness with which the questionnaires have been designed. Subsequently, it is imperative for researchers to take a number of steps in order to ensure validity and reliability.
These steps include understanding the research background, conceptualising the questionnaire, and determining the format and data analysis method. Lietz (2010, p. 260) posits that by adhering to the above steps, the researcher is in a position to determine whether the questionnaire has taken into account the various forms of validity, which include ‘construct validity, face validity, content validity, and criterion-related validity’.
After formulating the questionnaire, it is essential for researchers to undertake pilot testing in order to assess the effectiveness of the questionnaires in gathering the relevant data. Researchers can adopt different types of pilot testing. Some of these methods, according to Krosnick and Presser (2010, p. 118), include ‘testing according to organisation and testing according to the respondents’ participation’. Pilot testing provides the researcher with an opportunity to identify possible errors in the questionnaires. Thus, the researcher is in a position to make the necessary adjustments. The study also underscores the importance of integrating various reliability-testing methods such as the split-half method and the test-retest method.
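The split-half method mentioned above can be illustrated with a short sketch: each respondent's items are split into two halves, the half-totals are correlated, and the Spearman-Brown formula steps the correlation up to estimate full-test reliability. The respondent scores and the odd/even split below are illustrative assumptions, not data from any of the cited studies.

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(item_scores):
    """Correlate odd- and even-numbered item totals per respondent,
    then apply the Spearman-Brown correction 2r / (1 + r)."""
    odd = [sum(row[::2]) for row in item_scores]
    even = [sum(row[1::2]) for row in item_scores]
    r = pearson(odd, even)
    return 2 * r / (1 + r)

# Hypothetical Likert responses: rows are respondents, columns are items.
scores = [
    [5, 4, 5, 4],
    [2, 2, 3, 2],
    [4, 5, 4, 4],
    [1, 2, 1, 2],
]
print(round(split_half_reliability(scores), 3))  # prints 0.925
```

A coefficient close to 1 suggests the two halves of the questionnaire measure the same construct consistently; the test-retest method works analogously, correlating total scores from two administrations of the same instrument.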
The analysis shows that the response rate achieved in conducting a particular study has a significant impact on its credibility and relevance. Therefore, it is important for the researcher to ensure that the questionnaires are designed effectively through the incorporation of appropriate question types. Some of the most common types that researchers may adopt include open-ended and close-ended questionnaires.
Researchers have an obligation to ensure that the questions used are designed effectively, which can be achieved by taking into account a number of issues. First, the questions should be grammatically correct in order to enhance the level of understanding amongst the respondents. Additionally, the questions should be clear, relatively short, and simple. Other aspects that must be taken into account in designing the questions include eliminating socially desirable responses, avoiding negative words, and eliminating short checklists. The study also shows that it is essential for researchers to ensure that an effective structure and layout are adopted.
The significance of incorporating effective response and measurement scales has been emphasised. One of the measurement scales that can be adopted is the Likert scale. However, the scale should be of appropriate length in order to enhance translation ease and accommodate the variation in the respondents’ attitudes.
Reference List

Bhandari, A & Wagner, T 2006, ‘Self-reported utilisation of health care services: measurement and accuracy’, Medical Care Research and Review, vol. 63, no. 2, pp. 217-235.

Dawes, J 2008, ‘Do data characteristics change according to the number of scale points used? An experiment using 5-point, 7-point and 10-point scales’, International Journal of Market Research, vol. 50, no. 1, pp. 61-77.

Dornyei, Z 2008, Questionnaires in second language research: construction, administration, and processing, Routledge, New York.

Habib, E & Magalhaes, L 2007, ‘Development of a questionnaire to detect typical behaviour in infants’, Brazilian Journal of Physical Therapy, vol. 11, no. 3, pp. 155-160.

Hill, N, Brierley, J & MacDougall, R 2010, How to measure customer satisfaction, Gower Publishers, Chicago.

Jankowicz, D 2011, Research methods for business and management, Edinburgh Business School, Scotland.

Kouame, J 2010, Using readability tests to improve the accuracy of evaluation documents intended for low-literate participants, Western Michigan University, Michigan.

Krosnick, J & Presser, S 2010, Question and questionnaire design, Emerald Group Publishing Limited, New York.

Lewis, M & Slack, N 2007, Operations management: critical perspectives on business, Taylor & Francis, New York.

Lietz, P 2010, ‘Research into questionnaire design: a summary of the literature’, International Journal of Market Research, vol. 52, no. 2, pp. 249-274.

Mortel, T 2008, Faking it: social desirability response bias in self-report research, Southern Cross University, New York.

Rattray, J & Jones, M 2005, ‘Essential elements of questionnaire design and development’, Journal of Clinical Nursing, vol. 3, no. 3, pp. 234-244.