The capacity of the internet as a research tool remains largely unknown, which is why social scientists have begun examining the web's relevance to research from a broader perspective. Experts therefore need to answer some of the most critical questions about which research methods are both cost-effective and methodologically sound. For instance, the mode adopted for any study may introduce bias, because the mode can influence the types of responses people give. This is especially true given that survey modes shape responses through both cognitive and normative mechanisms.
Grandcolas, Rettie, and Marusenko are among the social scientists who have explored the viability of the internet as a research tool. They argue that web surveys are the principal instruments of internet research and are especially suited to market studies in the US (Grandcolas, Rettie & Marusenko, 2003). The authors set out to determine whether web survey biases result from a mode effect or from the samples used. Their conclusion, which serves as the thesis of this review, is that the variations observed relate to sample bias rather than mode bias. They also concluded that the ramifications of the low-response modes used in web surveys have effects that remain largely unrecognized. This review therefore examines the extent to which the authors support their argument and suggests alternatives that would have strengthened it, particularly with regard to workplace diversity.
This section analyzes the arguments Grandcolas, Rettie, and Marusenko advance in support of the conclusion noted in the introduction. First, the authors argue that online surveys are subject to less bias than conventional pen-and-paper approaches. They observe that a self-completed survey requires respondents to have the motivation, reassurance, and confidence to participate willingly. Comparing online surveys with interviews, the authors concluded that interviews can influence the responses given because of the social cues involved. In this respect, they argued that online research methods are preferable because they involve less normative bias and acquiescence: the absence of social cues online encourages friendlier and more open responses from participants.
This argument, however, has a weakness: it disregards the fact that visual survey modes, such as those on the internet, are subject to primacy effects. The responses people give online are influenced to some degree by what the respondents may already have experienced about the topic of study, and the magnitude of primacy effects on online responses is difficult to measure. From this perspective, there is a counterargument that online surveys remain liable to this form of influence. The authors noted the issue only in passing, commenting that online surveys may vary the questions used and thereby eliminate biases. Yet if a survey retains its objectivity, the types of responses should remain the same even as the questions administered vary; the authors did not make explicit that a survey must hold its objective constant while the questions change.
The validity of the data the authors used derives in part from the variety of data-collection methods employed. They collected data using questionnaires, which appeal to many respondents, and combined this with email-based collection, thereby incorporating both the visual and the oral aspects of research. The sampling method likewise reflected a diversity of research approaches, which is a better way of studying segments of the population (Gatignon, 2013). The two methods produced a wide range of statistics for analysis, and the researchers' analyses revealed variations in response distribution between the web-based and paper-based approaches.
Systematic comparisons of the means, variances, kurtosis, skewness, and modes indicated independence in the data used. The relationships established in the analysis led to the conclusion that the variables explored in the study were largely independent (Easterby-Smith, Thorpe & Jackson, 2012). The research indicated far-reaching effects of sample bias, though it produced no evidence of bias in web administration modes. Web-generated samples were found to be non-representative because they exclude people who do not use the web. Web samples also tend to generate low response rates, which cannot be calculated precisely because participation invitations are posted openly on the web.
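The kind of comparison described above can be sketched briefly in code. The sketch below is illustrative only: the response data are synthetic stand-ins (Likert-style 1–5 answers drawn at random), not the study's actual data, and the Mann–Whitney U test is one commonly used way to compare two response distributions, not necessarily the test the authors ran.

```python
import numpy as np
from scipy import stats

def distribution_summary(sample):
    """Descriptive statistics of the kind compared across survey modes."""
    return {
        "mean": float(np.mean(sample)),
        "variance": float(np.var(sample, ddof=1)),
        "skewness": float(stats.skew(sample)),
        "kurtosis": float(stats.kurtosis(sample)),
    }

# Synthetic stand-ins for web and paper responses (1-5 Likert answers);
# the original study's data are not reproduced here.
rng = np.random.default_rng(0)
web = rng.integers(1, 6, size=200)
paper = rng.integers(1, 6, size=200)

web_stats = distribution_summary(web)
paper_stats = distribution_summary(paper)

# A nonparametric test of whether the two response distributions differ;
# a large p-value is consistent with "no mode effect".
u_stat, p_value = stats.mannwhitneyu(web, paper)
print(web_stats, paper_stats, p_value)
```

Because both synthetic samples come from the same distribution, the summary statistics should be similar, mirroring the independence the authors report between mode and response distribution.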
The authors also noted that responses to web surveys generally exhibit a wider range of characteristics than responses to conventional research methods. They cited several pieces of literature comparing traditional and web-based surveys; for example, email and web-based surveys can yield different types of results. This variation in the answers obtained through email and web-based practices means that these modes can supply more varied data for analysis.
Arguments That Would Increase the Validity of Conclusions
The recommendations here follow from the weaknesses of web surveys. First, the authors' argument ignores some dimensions of workplace diversity (Easterby-Smith, Thorpe & Jackson, 2012). Access to the web remains limited to a subset of the population, despite efforts to broaden its use. The method therefore has limited coverage, a problem known as coverage error: a mismatch between the target population and the frame population. Web-based research also covers a limited range of respondents and cannot study all spheres of life. Since workplaces include people from diverse backgrounds, some of whom cannot access the internet or do not know how to use it, researchers should account for demographic characteristics within the population, distinguishing among demographic groups rather than treating the sample as homogeneous. Simply acknowledging this weakness means that surveyors should state the range of respondents for which their results hold. The results could be made more valid by training respondents in the use of the internet.
Another way to improve the validity of the arguments is to use verbal appeal to influence results. Diversity means that some respondents are disadvantaged by the visual presentation that web-based approaches adopt; the approach is methodologically limited and lacks the human appeal that would make the process more comfortable for respondents.
Two features distinguish web surveys from conventional research methods: web-based surveys are self-administered, and they employ visual presentation, the opposite of conventional survey methods (Gesell, Drain & Sullivan, 2007). Conventional surveys cannot be completed on their own because they require the physical presence of an administrator, and they employ oral approaches.
This part considers hypotheses that could answer the research questions developed in part one of the study. The chosen hypotheses relate the variables in the thesis statement and help address the questions posed. The thesis of the study is Grandcolas, Rettie, and Marusenko's conclusion that the variations observed in surveys relate to sample bias rather than mode bias (Grandcolas, Rettie & Marusenko, 2003), together with their claim that the ramifications of the low-response modes used in web surveys have effects that remain unrecognized. The independent variables are the survey mode and the sampling method, which produce effects on the dependent variables, the responses generated from the population. The null hypothesis is that the sampling mode does not affect survey bias, while the alternative hypothesis is that it does. Control variables in these relationships may include characteristics of the population, such as people's natural behavior toward web survey methods (Gesell, Drain & Sullivan, 2007), and the type of response given under either approach.
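The null hypothesis above can be made concrete with a standard chi-square test of independence between mode and response. The contingency table below is invented purely for illustration (the study's counts are not reproduced here), and the chi-square test is one conventional choice for this hypothesis, not necessarily the procedure the authors used.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows are survey modes (web, paper),
# columns are response categories. Counts are invented for illustration.
observed = np.array([
    [40, 35, 25],   # web
    [38, 36, 26],   # paper
])

# H0: the response distribution is independent of mode
# (i.e., the mode does not bias responses).
chi2, p_value, dof, expected = chi2_contingency(observed)

alpha = 0.05
reject_null = p_value < alpha
print(f"chi2={chi2:.3f}, p={p_value:.3f}, reject H0: {reject_null}")
```

With these near-identical rows the test fails to reject the null, which is the pattern the authors report for mode effects; a sampling-method comparison would use the same machinery with sampling approaches as the rows.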
Research Question 1: Does Mode Affect the Response Rates?
Results from past surveys on the response rates of web- and email-based surveys were inconclusive. The current analysis likewise failed to show that response rates vary with survey mode. The researchers compared responses from paper and web-based surveys and found an insignificant variation, indicating that the mode of survey does not affect the rate of responses from the population (Grandcolas, Rettie & Marusenko, 2003).
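A comparison of two response rates like this is often done with a two-proportion z-test. The sketch below uses invented figures (120 of 400 web invitations answered versus 130 of 400 paper invitations); it illustrates the logic of "insignificant variation" rather than reproducing the study's numbers or its exact procedure.

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Two-sided z-test for equality of two response rates (pooled variance)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * norm.sf(abs(z))  # two-sided
    return z, p_value

# Invented response counts: 120/400 web vs 130/400 paper.
z, p = two_proportion_z_test(120, 400, 130, 400)
print(f"z={z:.3f}, p={p:.3f}")
```

With these figures the p-value is well above 0.05, so the difference in response rates would not be judged significant, matching the authors' finding of no mode effect on response rates.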
Research Question 2: Which of the Two Survey Methods Is Preferred by the Population?
Some organizations hire expert firms to complete employee surveys on their behalf, which suggests that such firms prefer paper surveys to the web. The current survey, however, asked people directly about their preferences. Close to 30% said they preferred paper surveys, 31% preferred the internet, and 39% indicated that they did not care which approach was applied. These results suggest that people have no strong preference for the survey method used, since the analysis showed a negligible statistical difference between the internet and paper options (Grandcolas, Rettie & Marusenko, 2003).
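The claim of a negligible difference can be illustrated with a chi-square goodness-of-fit test. The counts below assume a hypothetical sample of 100 respondents matching the reported percentages (30 paper, 31 web, 39 no preference); the study's actual sample size and test are not reproduced here.

```python
from scipy.stats import chisquare

# Hypothetical counts for 100 respondents matching the reported percentages.
observed = [30, 31, 39]

# Test 1: among those who expressed a preference, is paper vs web balanced?
chi2_pref, p_pref = chisquare([30, 31])   # expected: 30.5 each

# Test 2: do the three categories depart from a uniform split?
chi2_all, p_all = chisquare(observed)     # expected: 100/3 each

print(f"paper vs web: p={p_pref:.3f}; all three categories: p={p_all:.3f}")
```

At this sample size neither test rejects its null at the 5% level, which is consistent with the authors' conclusion that the preference difference between internet and paper is statistically negligible.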
Research Question 3: Does the Sample Method Affect the Bias of Surveys?
Having established no coherent relationship between the chosen survey modes and the response rates, the study went on to explore the effect of the sampling technique. Results obtained with different sampling approaches were found to vary widely. These variations indicated that the sampling method affects the bias of a survey's results. Together, the three questions support the finding that bias in survey results stems from the application of different survey samples.
Assumptions and Limitations of the Procedures
The study assumed a uniform answering rate among respondents, which made the results easier to analyze. Its biggest limitation was that the observed differences were too small, resulting in negligible statistical differences. A further limitation was that the survey results came from a single organization (Grandcolas, Rettie & Marusenko, 2003). Participation was also restricted by cost and by the willingness of people and organizations to take part, and the requirement that each test site disclose the details of its participants led other enterprises to withdraw.
Easterby-Smith, M., Thorpe, R., & Jackson, P. (2012). Management Research. London: Sage.
Fischer, H. (2011). A history of the central limit theorem: From classical to modern probability theory. New York: Springer.
Gatignon, H. (2013). Statistical analysis of management data. New York: Springer.
Gesell, S. B., Drain, M., & Sullivan, M. P. (2007). Test of a Web and paper employee satisfaction survey: Comparison of respondents and non-respondents. International Journal of Internet Science, 2(1), 45-58.
Grandcolas, U., Rettie, R., & Marusenko, K. (2003). Web survey bias: sample or mode effect? Journal of Marketing Management, 19(5-6), 541-561.