Resource papers in action research
 

Rigour and relevance in action research

 

This is a resource file which supports the regular public program "areol" (action research and evaluation on line), offered twice a year beginning in mid-February and mid-July.  For details email Bob Dick: bdick@scu.edu.au or bd@uq.net.au

...  in which action research is offered as a rigorous methodology in those situations where experimental and quasi-experimental methods are very hard to apply successfully

 

Imagine this situation...

You are a practitioner.  You are acting as consultant to a client group within a change program.

The problems are unknown, so a lot of initial diagnosis is needed.  You know that much of the understanding about the client situation will emerge slowly during the study.  Time is limited, so time-efficient methods must be used.  You work hard for long hours, and cannot find time for much extra.  The client group expect you to involve them closely in the program.

You would like the program to be appropriate and successful.  You would therefore like the client group to understand what they are doing.

You would also like, if possible, to increase your own understanding of people, and systems, and change.  In other words, you would like to combine research with your consultancy.

You probably studied research methodology in your degree.  You probably did one or two independent research projects.  You learned to value scepticism and empiricism.

But you've found that it's very hard to apply, as part of your practice, the methods you were taught.

For example, you don't yet know enough to have more than a very general research question.  You have no way of knowing what many of the variables are, let alone controlling all of them.  You can't standardise your research process, and be flexible and responsive to the situation at the same time.  You have to involve the client group.  There is no simple way in which you can use an appropriate control group.

What are you to do?

 

We know what most practitioners do when they have been taught conventional experimental and quasi-experimental methods.  Barlow, Hayes, and Nelson (1984) and Martin (1989) provide the answer.  Most practitioners decide that research just doesn't fit with practice.  Thereafter, they avoid trying to do research.

A few persist.  They tend to use fairly loose quasi-experimental methods, and often apply them poorly.  There are quasi-experimental designs which provide high levels of rigour, and there are designs which can be done cheaply and combined with practice.  Not much quasi-experimentation achieves both at once.

There is an alternative -- to use action research.

To some people schooled in conventional approaches, it looks like poor research.  They apply the criteria that are appropriate for their style of research -- quantification, control, objectivity, and so on.

What do they find when they use these criteria to judge action research?  There isn't a precise research question.  The researcher involves the clients in the research process.  There probably isn't much quantification, if any.  Far from holding the situation constant, researcher and clients actually try to change it.  For that matter, they change the design itself.  ...

 

If you are someone who uses experimental or quasi-experimental designs, I have an invitation to offer: try judging action research on its merits.  Step outside your own paradigm.  In fluid field situations, compare it to most quasi-experimental research you've experienced.

I encourage you to practise the scepticism you value.  However, be sceptical about your own approach, too.

I predict you will find that, in some situations, action research can deal with the difficulties better than other approaches.  In its own way it achieves high levels of rigour.  It just goes about doing so in ways that conventional researchers don't expect.

Please note that I said "some situations".  I'm not valuing action research over other research paradigms.  All I'm doing is pointing out that it is designed to achieve rigour in settings where other paradigms have problems doing so.

 

Those practitioners who deal with complex social systems, and who continue to try to do good research, are drawn in similar directions.  According to Cook and Shadish (1986), that is what has happened in the field of program evaluation.  In a different field, Checkland (1981) documents his own move from hard systems analysis to soft systems analysis.  In instances such as these, the imperatives of the situation are driving the changes.

I don't think this is all that surprising.  "The scientific method" wasn't developed by using the scientific method.  It was a bootstrap operation.  It evolved.

It evolved to suit particular outcomes in particular environments.  I would expect a different environment to select for a different "species" of research, by a different history of evolution.

 

Compared to some approaches, action research is a recent development.  Many of its users have learned to apply it well enough.  They learned to do it the way they learned most of their practice: by doing it.  Many of them are not very good at explaining it in logical terms to people who expect something different.  If it comes to that, many of them don't try.

So, most of its users don't try to justify it to the sceptics.  They just do it.

 

I would like to explain how action research can achieve high levels of rigour without sacrificing the responsiveness and flexibility that some situations require.

A scientific claim is an assertion, not a fact.  What makes it scientific is that, in the words of Phillips (1987), quoting Dewey (1938), it is "warrantable".  In the course of a typical change program, very many assertions must be made.  The difficulty is to make them adequately warrantable.

An assertion is an interpretation of evidence.  The evidence is drawn from the data in the study, and from the literature.  To be warrantable, I assume, the interpretation must have been reached only after attempts to exclude other interpretations.  Further, it must account for the evidence as well as, or better than, the alternative interpretations.

The interpretation can only be as good as the evidence on which it is based.  The evidence therefore must be an adequate sample of all the evidence which might have been collected.

Research must address this, while observing the "givens" of the situation.  Action research does so through its cyclic process: each cycle of planning, action and critical reflection provides an opportunity to test the data and the emerging interpretation.  For example...

First, at each cycle the researcher may try to disconfirm the emerging interpretation.  The use of many short cycles allows more chances to disconfirm.

Second, at each cycle the methods used can be critiqued and refined.

Third, data collection and interpretation can be included in each cycle.  Thus both data and interpretation can be tested in later cycles.

Fourth, divergent data can be specifically sought out.  This increases the chance that any piece of data or interpretation will be challenged by other data.  (To some extent it may also turn the client group's participation into a partial asset.)

Fifth, the literature can be used as a further source of possible disconfirmation.  The researcher who has deliberately sought disconfirming literature, and failed to find it, has a more warrantable assertion than could otherwise be claimed.

Sixth, the planned changes which emerge from the program are derived from the data and the interpretation.  Those changes offer a further opportunity for disconfirmation: if they do not produce the outcomes the interpretation predicts, the interpretation is called into question.

Where flexibility and participation are required, and the situation is complex, any research methodology faces serious threats to validity.  I would claim that, in these circumstances, action research meets those threats better than conventional research does.

 

Note that there are many claims that I cannot make, and have not made.  I have not claimed that action research can yield causal explanations.  I have not claimed that the findings are necessarily generalisable (though in fact there are ways of addressing this).  I have not sought to criticise other research paradigms -- in some other situations they provide a better fit than action research does.

What I have tried to do is demonstrate that, within the givens, action research can be done in such a way that the resulting assertions are warrantable.

_____

Barlow, D.H., Hayes, S.C., and Nelson, R.O.  (1984) The scientist practitioner: research and accountability in clinical and educational settings.  New York: Pergamon.

Checkland, P.  (1981) Systems thinking, systems practice.  Chichester: Wiley.

Cook, T.D.  and Shadish, W.R.  (1986) Program evaluation: the worldly science.  Annual Review of Psychology, 37, 193-232.

Dewey, J.  (1938) Logic: the theory of inquiry.  New York: Henry Holt and Co.

Martin, P.R.  (1989) The scientist practitioner model and clinical psychology: time for change?  Australian Psychologist, 24(1), 71-92.

Phillips, D.C.  (1987) Philosophy, science, and social inquiry: contemporary methodological controversies in social science and related applied fields of research.  Oxford: Pergamon Press.

_____

 

Copyright (c) Bob Dick 1995-2005.  This document may be copied if it is not included in documents sold at a profit, and this and the following notice are included.

This document can be cited as follows:

Dick, B.  (1997) Rigour and relevance in action research [On line].  Available at
http://www.uq.net.au/action_research/arp/rigour.html


 

 


 
Maintained by Bob Dick; this version 1.04w last revised 20050817

A text version is also available at URL ftp://ftp.scu.edu.au/www/arr/rigour.txt