Clinical studies may be grouped into different categories, such as experiments, surveys, and observational research. Whatever the category, the researcher must plan meticulously to accomplish the goal of the research. Several elements determine the quality of a study. The researcher must identify and define an operational research problem. A clear description of the observational and experimental components, the required sample, and the control groups must be provided. In particular, the researcher must state the inclusion and exclusion criteria, which specify the potential factors that may affect the measured units and observations. A distinct research design and the related methodology must also be provided. Given these considerations, an adequate sample size must be selected: large enough that a scientifically meaningful effect reaches statistical significance, yet small enough that effects of little scientific significance do not become statistically detectable. Economic factors also play an important role in the selection of the sample size, because an undersized study may be a waste of resources.
The selection of sample size is an important ethical concern in studies involving human or animal participants, because a poorly designed study exposes participants to potentially harmful treatments without the prospect of usable knowledge (Shuster, 1990). A basic phase in the design of experimental studies is therefore power and sample size analysis. Power is the probability of correctly rejecting the null hypothesis that the outcome does not differ across the defined research groups. High power is desirable, and a larger sample size increases power; a researcher may therefore regulate the power of a study by adjusting the sample size (Wittes, 2002). Experimental results are reported in terms of effect size, P-value, and confidence interval. The P-value is the probability that the effect observed in the sample arose by chance. The P-value is linked with power, which is the probability of detecting a true difference between two groups when one actually exists in the population.
This report is divided into two parts. The first part describes how the G-Power software is used to determine the required sample size and error probabilities for a study. The second part summarizes two research designs that can be used to address the proposed research question, which investigates the influence of workplace fun on employee performance and organizational productivity.
Using a t-Test to Calculate Sample Size in G-Power
The G-Power software was used to compute the sample size required for experimental research based on specified factors. The design factors were a one-tailed test to determine the sample size for two equally sized independent groups, a small effect size, and standard alpha and beta values. Figure 1 in the appendix is a screenshot of the power analysis performed using G-Power.
The test family selected for the analysis was "t tests," and the statistical test was "Means: Difference between two independent means (two groups)." A value of 0.2 was entered for the effect size d because a small effect size was specified. An alpha value of 0.05 and a power of 0.8 were used. The two groups were of equal size, so the allocation ratio N2/N1 was set to 1. Figure 2 in the appendix illustrates these inputs.
The output of the G-Power analysis, illustrated in Figure 3 in the appendix, indicated that a total sample size of 620 was appropriate for the research study based on the factors provided.
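The figure can be cross-checked with a short stdlib-Python sketch using the standard normal approximation n = 2((z₁₋α + z₁₋β)/d)² per group; G-Power itself uses the exact noncentral t distribution, so the two can differ slightly, although here they agree.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Per-group n for a one-tailed two-sample comparison of means,
    via the normal approximation n = 2 * ((z_alpha + z_beta) / d)**2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha)  # one-tailed critical value (about 1.645)
    z_beta = z.inv_cdf(power)       # z for the desired power (about 0.842)
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

n = n_per_group(0.2)                # small effect size d = 0.2
print(n, 2 * n)                     # 310 per group, 620 in total
```

The approximation reproduces G-Power's total of 620 exactly (310 per group).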
If the computed sample size exceeds what can be afforded, the compromise function may be used to derive α and β for a sample that is 50% of the size. The new sample is 620/2 = 310. Assuming alpha equals beta, the ratio q = β/α equals 1. The effect size is unchanged at 0.2, and the sample sizes for group 1 and group 2 are 155 each, since the two groups are evenly split. A screenshot of the analysis is shown in Figure 4 in the appendix.
From the compromise function, alpha and beta each have a value of 0.189487. Power, computed as 1 − beta, is therefore 1 − 0.189487 = 0.810513. Figure 5 in the appendix shows the results of the compromise function. A sample size is sufficient for a study if the power exceeds 80% (Trochim & Donnelly, 2008). Since the power (81.05%) is greater than 80%, the smaller sample size is worth using for the study.
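Under a normal approximation, the compromise solution with q = 1 has a closed form: the critical value sits at the midpoint of the noncentrality δ = d·√(n/2), so that the α and β tail areas coincide. The stdlib-Python sketch below approximates G-Power's exact noncentral-t computation.

```python
from math import sqrt
from statistics import NormalDist

z = NormalDist()
d, n = 0.2, 155                # effect size; per-group n after halving the sample
delta = d * sqrt(n / 2)        # noncentrality of the standardized group difference
crit = delta / 2               # alpha = beta puts the critical value at the midpoint
alpha = 1 - z.cdf(crit)        # equals beta by construction
power = 1 - alpha
print(round(alpha, 4), round(power, 4))
```

This yields α = β ≈ 0.189 and power ≈ 0.811, in line with G-Power's 0.189487 and 0.810513.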
Using ANOVA to Calculate Sample Size in G-Power
The G-Power software was used to compute the sample size required for experimental research based on specified factors. The design factors were an ANOVA (fixed effects, omnibus, one-way) test to determine the sample size for three groups, a small effect size, and alpha and beta values of 0.5 and 0.2, respectively. Figure 6 in the appendix is a screenshot of the power analysis performed using G-Power.
The test family selected for the analysis was "F tests," and the statistical test was "ANOVA: Fixed effects, omnibus, one-way." A value of 0.10 was entered for the effect size f because a small effect size was specified. The alpha value (α) of 0.5 was entered, and a power (1 − β) of 0.8 was used. The sample was divided into three groups, which was entered in the "Number of groups" field. The figure in the appendix illustrates these inputs.
The results of the G-Power analysis, summarized in the appendix, show that a total sample size of 261 was sufficient for the study based on the provided factors.
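The ANOVA figure can be sanity-checked by Monte Carlo simulation in stdlib Python: draw many datasets under the null and under an effect of f = 0.10, estimate the α = 0.5 critical value empirically, and count rejections. The group means (−c, 0, c) with c = f·√(3/2) are one of many patterns that realize Cohen's f = 0.10 with σ = 1; the estimate is stochastic, so it matches G-Power's power of 0.8 only approximately.

```python
import random

random.seed(1)
k, n, f, alpha, sims = 3, 87, 0.10, 0.5, 3000   # 3 groups of 87 -> N = 261
c = f * (1.5 ** 0.5)    # means (-c, 0, c) give between-group SD of f (sigma = 1)

def f_stat(groups):
    """One-way ANOVA F statistic for k equal groups of size n."""
    means = [sum(g) / n for g in groups]
    grand = sum(means) / k
    ssb = n * sum((m - grand) ** 2 for m in means)
    ssw = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ssb / (k - 1)) / (ssw / (k * n - k))

def simulate(mus):
    return sorted(f_stat([[random.gauss(m, 1) for _ in range(n)] for m in mus])
                  for _ in range(sims))

null_f = simulate([0.0, 0.0, 0.0])
crit = null_f[int((1 - alpha) * sims)]   # empirical critical value at alpha = 0.5
alt_f = simulate([-c, 0.0, c])
power = sum(F > crit for F in alt_f) / sims
print(round(power, 2))
```

With N = 261 the simulated power lands near 0.8, consistent with the G-Power result.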
If the resulting sample size is beyond what can be afforded, the compromise function can be used to calculate the alpha and beta values for a smaller sample. A new sample of 132 will be used, since it is approximately 50% of 261 and a multiple of three (Trochim & Donnelly, 2008). Assuming alpha equals beta, the β/α ratio equals 1. The effect size remains 0.10. The screenshot in Figure 9 in the appendix illustrates the analysis.
From the compromise function, alpha and beta each have a value of 0.399302. In a compromise analysis, α and β are calculated as functions of the effect size and N, with the error-probability ratio q = β/α. For balanced error risks, q must equal 1; α is assumed to equal β to balance type I and type II errors (Piasta & Justice, 2010). The power (1 − β) is 1 − 0.399302 = 0.600698. The figure in the appendix summarizes the output of the compromise function. A sample size is appropriate only if the power is greater than 80%. Since the power (60.07%) is less than 80%, the smaller sample size is not worth using for the research, and the researcher must maintain the original sample size.
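The balanced-error compromise can likewise be approximated by simulation: generate F statistics under the null and under f = 0.10 at the reduced N = 132, then search for the critical value at which the empirical α and β tail areas coincide. The following stdlib-Python sketch is stochastic, so it tracks G-Power's 0.399302 only approximately; the (−c, 0, c) mean pattern is one assumed way of realizing Cohen's f = 0.10.

```python
import bisect
import random

random.seed(2)
k, n, f, sims = 3, 44, 0.10, 3000    # 3 groups of 44 -> N = 132
c = f * (1.5 ** 0.5)                 # means (-c, 0, c) realize Cohen's f = 0.10

def f_stat(groups):
    # One-way ANOVA F statistic for k equal groups of size n.
    means = [sum(g) / n for g in groups]
    grand = sum(means) / k
    ssb = n * sum((m - grand) ** 2 for m in means)
    ssw = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ssb / (k - 1)) / (ssw / (k * n - k))

def simulate(mus):
    return sorted(f_stat([[random.gauss(m, 1) for _ in range(n)] for m in mus])
                  for _ in range(sims))

null_f = simulate([0.0, 0.0, 0.0])
alt_f = simulate([-c, 0.0, c])

def err(t):                          # |alpha - beta| at candidate critical value t
    a = (sims - bisect.bisect_right(null_f, t)) / sims   # alpha: null F above t
    b = bisect.bisect_right(alt_f, t) / sims             # beta: alternative F below t
    return abs(a - b)

crit = min(null_f, key=err)          # threshold that balances the two error rates
alpha = (sims - bisect.bisect_right(null_f, crit)) / sims
beta = bisect.bisect_right(alt_f, crit) / sims
print(round(alpha, 3), round(beta, 3), round(1 - beta, 3))
```

The simulated balance point comes out near α = β ≈ 0.40 with power ≈ 0.60, in line with the G-Power output.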
Research Designs for the Research Question
Types of Design
The purpose of the proposed research is to investigate the relationship between workplace fun and employee productivity. A researcher must understand the effect of a design method on the study before proceeding with the design. Two types of research design that can address this research question are a quasi-experimental design and a survey research design.
Survey Research Design
The survey research design attempts to describe and explain current conditions by using questionnaires on a range of topics to fully characterize a trend (Piasta & Justice, 2010). For the proposed research, the survey design will be used to compare the productivity of two equally sized sample groups. One group (the experimental group) will comprise employees who experience workplace fun, while the control group will comprise employees who do not. Participants' perceptions of their own productivity will be analyzed and correlated with their organization's productivity. The result of the analysis will inform the researcher's conclusion.
The success of the proposed research design depends on the researcher's ability to choose the correct sample size. A one-tailed test will be used to determine the adequate sample size; a one-tailed test is suitable because the two groups will be equally sized (Demidenko, 2008). A large effect size is assumed for the proposed study because survey research designs involve large numbers of participants (Piasta & Justice, 2010). Standard alpha and beta values of 0.05 and 0.2 are assumed, respectively, to set the power at 80% (Demidenko, 2008). The G-Power analysis produces the following results.
The results of the analysis show that a sample size of 42 will be sufficient for the research. This size is affordable, so a compromise analysis is not required.
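A short normal-approximation sketch in stdlib Python gives a slightly smaller figure here: with d = 0.8, α = 0.05, and power 0.8 it yields 20 per group (40 total), whereas G-Power's exact noncentral-t computation arrives at 21 per group (42 total); the normal approximation is known to understate n slightly when samples are this small.

```python
from math import ceil
from statistics import NormalDist

z = NormalDist()
d, alpha, power = 0.8, 0.05, 0.80        # large effect size, one-tailed test
z_sum = z.inv_cdf(1 - alpha) + z.inv_cdf(power)
n = ceil(2 * (z_sum / d) ** 2)           # per-group n, normal approximation
print(n, 2 * n)                          # 20 per group, 40 in total
```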
Quasi-Experimental Research Design
A quasi-experimental research design can also be used to answer the proposed research question. Quasi-experiments approximate true experimental designs but lack random assignment of participants to groups. The quasi-experiment will comprise three sample groups reporting their experiences of workplace fun and organizational productivity.
An ANOVA (fixed effects, omnibus, one-way) will be used to determine the adequate sample size. The ANOVA test is suitable for calculating the study sample because three groups are involved. A small effect size is assumed for the planned study because quasi-experimental designs succeed under various sample size conditions (Trochim & Donnelly, 2008). Standard alpha and beta values of 0.05 and 0.2 are assumed, respectively, to set the power at 80% (Piasta & Justice, 2010). Figure 11 in the appendix illustrates the outcome of the G-Power analysis.
The test result shows that a sample of 969 will be sufficient. A compromise analysis tests whether the sample can be reduced. Assuming a total sample size of 483, the G-Power analysis produces a power of 0.772041 (77.2%), as shown in Figure 12 in the appendix. Since 77.2% is below the 80% benchmark for an adequate sample, the sample of 969 should be used for the study.
References
Demidenko, E. (2008). Sample size and optimal design for logistic regression with binary interaction. Statistics in Medicine, 27(2), 36–46.
Piasta, S., & Justice, L. (2010). Cohen's d statistic. In N. Salkind (Ed.), Encyclopedia of research design (pp. 181–186). Thousand Oaks, CA: SAGE Publications.
Shuster, J. J. (1990). Handbook of sample size guidelines for clinical trials. Boca Raton, FL: CRC Press.
Trochim, W. M., & Donnelly, J. P. (2008). The research methods knowledge base. Mason, OH: Thomas Custom.
Wittes, J. (2002). Sample size calculations for randomized controlled trials. Epidemiologic Reviews, 24(3), 39–53.