Research is undertaken and applied across many professions and disciplines because it provides a basis for increasing knowledge and for informed decision making and action (Minichiello, Sullivan, Greenwood & Axford, 2004; DePoy & Gitlin, 2011). Within the nursing profession, research is the link between theory and practice and has influenced many changes to the way that nurses practise (Schneider, Elliot, LoBiondo-Wood & Haber, 2003). Research has brought about improvements in the delivery of care, which in turn contribute to improved patient outcomes (Loiselle, Profetto-McGrath, Polit & Beck, 2007). Nursing research makes available the best current evidence to support and underpin nursing practice; this is essential to achieving optimum biopsychosocial outcomes for patients, their families and their wider communities. Furthermore, research guides legislation and regulation at the level of government organisations (LoBiondo-Wood & Haber, 2010; DePoy & Gitlin, 2011; Davies & Logan, 2012).
Health research topic
This assignment will explore how research is designed, conducted and applied to investigate and inform the improvement of the mental health and wellbeing of those who care for people with dementia. In an international study on the global prevalence of dementia published in 2006, experts estimated that there were 24.3 million people living with dementia, with 4.6 million new cases every year: one new case every 7 seconds (Ferri, Prince, Brayne, Brodaty, Fratiglioni, Ganguli & Scazufca, 2006). An estimated 50,000 New Zealanders are currently diagnosed with dementia, and by 2026 this figure is projected to approach 78,000 (Ministry of Health, 2013). The prevalence of this cognitive disease necessitates research to better understand the effects and implications dementia has on people and society, and how society can be better equipped to face the psychosocial challenges dementia presents for those in caregiver roles.
DePoy and Gitlin (2011) define research as “multiple, systematic strategies to generate knowledge about human behaviour, human experience, and human environments in which the thinking and action processes of the research are clearly specified so that they are logical, understandable, confirmable, and useful” (p. 6). Two major research paradigms underpin these systematic strategies and determine how a researcher will ‘think and act’: the positivist and the naturalistic. The positivist paradigm is most closely allied with quantitative research, while the naturalistic paradigm is most often associated with qualitative research (Christensen & Johnson, 2012; Loiselle et al., 2011). Each paradigm is a perspective on research based on a set of shared assumptions, concepts, values and practices (Christensen & Johnson, 2012), and the two paradigms rest on very distinct ontological, epistemological and methodological foundations.
Quantitative research views the nature of the knowable, and the nature of reality, as objective, material and structural. From this worldview there is a reality ‘out there’ that is separate from, and independent of, the individual. The fundamental assumption of positivism, or the positivist paradigm, is that this reality can be verified and discovered through the scientific method (DePoy & Gitlin, 2011; Loiselle et al., 2011; Christensen & Johnson, 2012).
The positivist or experimental-type perspective employed by quantitative research primarily follows the confirmatory scientific method because it focuses on hypothesis testing and theory testing (Christensen & Johnson, 2012). Logical positivists believe that there is a single reality that can be discovered by reducing it into parts, and discovering the relationships among them. In other words, the logical, structural principles that guide some component of reality can be known. This concept is known as reductionism (DePoy & Gitlin, 2011).
Quantitative researchers typically use deductive reasoning to identify a single reality and to generate predictions or hypotheses. They then follow a systematic approach, progressing logically through a series of steps according to a prespecified plan, and use various ‘controls’ to minimise bias and maximise precision and validity (DePoy & Gitlin, 2011; Loiselle et al., 2011). Empirical evidence is gathered rigorously and systematically, directly or indirectly through the senses rather than through personal hunches, using tested instruments. This quantitative, numeric information is then analysed through statistical procedures to determine whether the data support the hypothesis (Loiselle et al., 2011). On the basis of these empirical results the hypothesis is confirmed or rejected.
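The hypothesis-testing logic described above can be sketched in a few lines of code. The caregiver-style depression scores below are invented purely for illustration; the function computes Welch's t-statistic, the kind of test statistic on which a hypothesis would then be confirmed or rejected against a critical value:

```python
# Illustrative sketch only: invented data, not results from any cited study.
from statistics import mean, stdev

def welch_t(sample_a, sample_b):
    """Welch's t-statistic for the difference between two group means."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = stdev(sample_a) ** 2, stdev(sample_b) ** 2  # sample variances
    return (mean(sample_a) - mean(sample_b)) / ((va / na + vb / nb) ** 0.5)

# Hypothesis: the intervention group has lower depression scores than control.
intervention = [10, 12, 9, 11, 8, 10, 9, 11]
control      = [14, 16, 13, 15, 17, 14, 15, 16]

t = welch_t(intervention, control)
print(round(t, 2))  # a large negative t here favours the hypothesis
```

A statistician would compare this statistic with the t-distribution to obtain a p-value; the point of the sketch is simply that the hypothesis is judged against numeric evidence, not against the researcher's impressions.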
Qualitative research, on the other hand, takes the view that, because the nature of the knowable and of reality is mentally constructed by the individual, realities are multiple, subjective and personal. This worldview holds that reality is not a fixed entity but a construction of the individuals participating in the research. The fundamental assumption of the naturalistic paradigm is that reality exists within a context and that many constructions are possible (Loiselle et al., 2011; Christensen & Johnson, 2012). Naturalistic inquiry theorists believe that ideas and individual interpretations are the lenses through which each person comes to know, understand and define the world: “Knowledge is based on how the individual perceives their experiences and how he or she understands his or her world” (DePoy & Gitlin, 2011, p. 26).
Naturalistic methods of enquiry attempt to capture these dynamic, holistic and individual aspects of phenomena in their entirety, within the context of those who are experiencing them. Naturalistic investigators therefore emphasise understanding the human experience as it is lived, usually through the collection and analysis of qualitative materials that are narrative and subjective (Loiselle et al., 2011). Qualitative methods differ from quantitative methods in that procedures are flexible and can be modified to capitalise on findings that emerge during the course of the study. Qualitative studies take place in natural settings, in the field, frequently over extended periods of time, and data collection and data analysis typically progress simultaneously. Consequently, naturalistic studies yield rich, in-depth information that can clarify the multiple dimensions of a complicated phenomenon (Loiselle et al., 2011).
The scope of this assignment is to further examine and analyse quantitative research design and methodology and how they relate to research into dementia.
Quantitative Experimental Design
In experimental-type research, DePoy and Gitlin (2011, p. 84) describe design as the plan or blueprint that specifies the procedures used to obtain empirical evidence and to determine the relationships among the variables of the study. In other words, the design is structured to enable examination of a hypothesised relationship among variables. In quantitative research, hypotheses are generally constructed from general principles prior to data collection and then tested during the study. Experimental design is therefore well suited to answering questions about cause and effect, or causation (Minichiello et al., 2004).
The specific procedures used to obtain empirical evidence depend on the study and the design method chosen, but quantitative experimental designs generally involve sampling, data collection, data analysis and reporting. Investigators employ sampling techniques to select a subgroup that can accurately represent a population, defined as a group of persons, elements or both that share a set of common characteristics predefined by the investigator. The intent is to be able to draw accurate conclusions about the population by studying a smaller group of elements, the sample (Minichiello et al., 2004; DePoy & Gitlin, 2011).

In quantitative research the collection of data, that is, the quantification or measurement of information, is a primary concern, so the researcher must ensure that the data-collection instrument is reliable and valid (DePoy & Gitlin, 2011). Reliability refers to the degree of consistency with which an instrument measures an attribute; validity addresses the critical issue of the relationship between a concept and its measurement, asking whether what is being measured actually reflects the underlying concept (Minichiello et al., 2004; DePoy & Gitlin, 2011). The instrument may be one the researchers designed themselves, one modified from another study, or an intact instrument used by another researcher (Creswell, 1994). When experimental-type research is conducted, the first preference is to select instruments that have demonstrated reliability and validity for the specific populations or phenomena under study, as is the case in the quantitative research articles explored later in this assignment.

Statistical analysis is an important action process in experimental-type research that occurs once data collection and data preparation are complete. It is at this juncture that data become meaningful and lead to knowledge building that is descriptive, inferential or associational.
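One common way the instrument reliability described above is quantified is Cronbach's alpha, a coefficient of internal consistency across the items of a scale. The sketch below uses invented item scores (a real study would use a validated scale and statistical software):

```python
# Illustrative sketch only: invented questionnaire data.
def cronbach_alpha(items):
    """items: one list of scores per scale item, all covering the same respondents."""
    k = len(items)                       # number of items in the scale
    n = len(items[0])                    # number of respondents

    def var(xs):                         # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(var(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / var(totals))

# Three hypothetical scale items answered by five respondents.
items = [
    [3, 4, 2, 5, 4],
    [3, 5, 2, 4, 4],
    [2, 4, 3, 5, 3],
]
alpha = cronbach_alpha(items)  # close to 1 when items move together
```

Values near 1 indicate that the items measure the same underlying attribute consistently; a conventional rule of thumb treats alpha above roughly 0.7 as acceptable.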
From this analysis investigators can interpret and summarize data, generalise findings to the population from which the sample is drawn, and make causal statements and predictions (DePoy & Gitlin, 2011).
True experimental design has three distinguishing properties: a randomised population sample, an intervention (otherwise known as a manipulation) and a control group for comparison (Nieswiadomy, 2008; Davies & Logan, 2012). By randomly assigning subjects to an experimental group and a control group, the investigator attempts to establish equivalence and to eliminate subject bias caused by inherent differences between the two groups (DePoy & Gitlin, 2011). Investigators then manipulate an independent variable (IV) so that the effect of its presence, absence or degree on the dependent variable (DV) can be observed. Manipulation is the action process of manoeuvring the independent variable; the IV could, for example, be a medication, a teaching plan or a treatment (Minichiello et al., 2004; LoBiondo-Wood & Haber, 2010; DePoy & Gitlin, 2011). The dependent variable is the variable that changes as a result of the manipulation, that is, the measured end result (Dempsey & Dempsey, 2000; Minichiello et al., 2004). This enables researchers to study cause-and-effect relationships (LoBiondo-Wood & Haber, 2010; Hedges & Williams, 2014). Within the health arena the ‘causes’ are often the interventions or treatments and the ‘effects’ are the final outcomes (Minichiello et al., 2004; Moule & Hek, 2011). The control group is the comparison group that receives the usual treatment or care rather than the experimental treatment under scrutiny. This ‘true’ experimental design is referred to as a randomised controlled trial (RCT) (LoBiondo-Wood & Haber, 2010). RCTs are considered the superior design when investigating cause-and-effect relationships (LoBiondo-Wood & Haber, 2010; Loiselle et al., 2011).
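The random-assignment step that distinguishes a true experiment can be sketched as follows; the participant identifiers and the fixed seed are hypothetical, used only to make the example reproducible:

```python
# Illustrative sketch only: hypothetical participants, not any cited trial.
import random

def randomise(participants, seed=None):
    """Shuffle the pool and split it into an experimental and a control arm."""
    pool = list(participants)
    random.Random(seed).shuffle(pool)    # every ordering equally likely
    half = len(pool) // 2
    return pool[:half], pool[half:]      # (experimental arm, control arm)

caregivers = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical caregivers
experimental, control = randomise(caregivers, seed=42)
```

Because chance alone decides each participant's arm, known and unknown characteristics tend to balance out across the two groups, which is what lets the investigator attribute outcome differences to the intervention.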
This control over variance and extraneous influences, inherent in experimental design, allows the researcher to state with a degree of statistical assurance that the study outcomes are a consequence of the manipulation of the independent variable rather than of chance. In other words, the design provides a degree of certainty that an investigator’s observations are not haphazard or random but reflect what is considered a true and objective reality. Quantitative experimental designs thereby minimise bias and the intrusion of unwanted factors that could confound findings and make them less credible (DePoy & Gitlin, 2011).
Although the true-experimental design is widely upheld as the best design for predicting causal relationships, being the most ‘objective’ and ‘true’ scientific approach, it may be inappropriate for other forms of inquiry in health and human services. Not all research questions seek to predict causal relationships between independent and dependent variables, and in some cases a true-experimental design may raise critical ethical concerns, such that other design strategies are more appropriate.
According to DePoy and Gitlin (2011), quantitative or experimental-type research comprises four categories of design: non-experimental, quasi-experimental, pre-experimental and true-experimental. As both of the chosen articles are randomised controlled trials, true-experimental design is the category discussed here.
DePoy and Gitlin (2011) suggest that a design in the experimental-type tradition should be chosen purposively because it fits the question, the level of theory development, and the setting or environment in which the research will be conducted.
The next part of this assignment examines two such pieces of research that demonstrate purposeful use of experimental design, specifically research aimed at curtailing psychosocial effects such as depression and at improving the mental health and wellbeing of caregivers caring for someone with dementia.
Experimental Research Examples
The research articles chosen for critique are both RCTs investigating the wellbeing of family caregivers of people with dementia.

The objective of the first study was to investigate the effectiveness of a home-based training programme supporting family caregivers of a family member with dementia. The study used the Medical Outcomes Study 36-item Short Form Survey to collect data on physical wellbeing and the Chinese adaptation of the Center for Epidemiologic Studies Depression Scale to measure depressive symptoms (Kao, Huang, Huang, Lian, Chiu, Chen, Kwok, Hsu & Shy, 2012). Results showed positive statistical outcomes for each category of physical health and a decreased risk of depression for the experimental group compared with the control group. The study concluded that the home-based caregiver training programme significantly improved health-related quality of life and decreased the risk of depressive symptoms (Kao et al., 2012).

The second RCT investigated the effectiveness of an internet intervention, ‘Mastery over Dementia’, supporting family caregivers of people with dementia. Results from the regression analyses showed that caregivers in the experimental group had decreased symptoms of depression and anxiety, leading to the conclusion that the internet course was an effective treatment (Blom, Zarit, Groot Zwaaftink, Cuijpers & Pot, 2015).
Experimental research design has strengths and weaknesses. Its main strength is that it is the most effective design for measuring cause-and-effect relationships (LoBiondo-Wood & Haber, 2010), and knowledge derived from experimental research has been applied and integrated into action (Carr, 1994). Random sampling increases the likelihood that findings are generalisable, although random selection is very time-consuming (Carr, 1994). In experimental-type design the researcher remains detached from the subjects, which guards against researcher bias within the study (Carr, 1994). However, experimental designs are often complex and unrealistic to implement in clinical environments, and can disrupt people’s routines (LoBiondo-Wood & Haber, 2010). Findings can also be affected by variation in how an intervention is administered; for example, it is impossible to ensure that different nurses deliver an intervention in exactly the same way with each person (LoBiondo-Wood & Haber, 2010). A further weakness is that many interventions are not amenable to ethical consent, for example an experimental design exposing people who smoke to measurement of adverse effects (LoBiondo-Wood & Haber, 2010). Because of these weaknesses, many researchers turn to quasi-experimental designs.
There is nothing inherently good or bad about any particular design. Every research design has its own strengths and weaknesses, and its adequacy rests on how well it answers the research question posed; this is the most important criterion for evaluating a design. A design that does not answer the research question is inappropriate, no matter how rigorous it may appear. It is also important to identify and understand the relative strengths and weaknesses of each design element (DePoy & Gitlin, 2011).