Destructive Behavior in Evaluations

This posting discusses what some evaluators have experienced as destructive behaviors by stakeholders during evaluation projects.

For example, the experiences my associate and I had while evaluating a safety program for a school district are not discussed in our article, "How the School Built Environment Exacerbates Bullying and Peer Harassment" (Fram and Dickmann 2012). Our experiences were not as extreme as some we have heard about, but they are similar to those described by Bechar & Mero-Jaffe (2014), who stated in their article:

Overtly, the program head had agreed to the evaluation and recognized its importance, covertly, his attitude showed inconsistency in his willingness to support the evaluation, which was expressed during the course of the evaluation and in his reaction to the final report; we interpret these as sign of fear of evaluation (p. 369).

We did not interpret our experience as a sign of fear, but as a serious issue involving the cohesiveness of district leaders, namely the superintendent and all of the school principals. Our evidence pointed towards the leadership style of the superintendent and the politics embedded in the school system. We even stated in our article that politics at the district level had a major negative impact on the safety program at the schools. If any fear of evaluation existed among the stakeholders, it was purely about the possible loss of their jobs. Any fear we witnessed was a symptom of the dysfunctional relationships among the stakeholders.

Bechar & Mero-Jaffe refer to Donaldson's (2007) introduction of a new term, excessive evaluation anxiety (XEA). I am inclined to say that our experiences had little connection, if any, to this phenomenon.

I propose two ways to lower the likelihood of experiencing destructive behavior from stakeholders.

1. Plan the initial stakeholder meeting more thoroughly and incorporate an informative session (whether on the same day or another day) to discuss the perceptions and concerns of all of the stakeholders involved. Meet with individual stakeholders afterwards to further address their concerns in private.

We did have an informative and well-planned meeting with the stakeholders together, and my associate and I each met separately with every stakeholder to address any concerns they had. For the most part, we believe the individual meetings improved our chances of collecting data effectively with support from the stakeholders. One of the principals wanted to be involved, and we made this happen even though the superintendent did not want any principal involved at any stage. We asked the principal to help us orchestrate the informed consent meeting with the teachers, and I asked the principal to walk with me as I took photographs to offer insight about particular spaces of the school. My associate was able to show the superintendent the benefit of having the principal involved.

2. Before the initial stakeholder meeting, schedule and orchestrate a focus group designed from a social constructivist perspective, or what Ryan et al. (2014) called a Type B focus group. This style of focus group reveals tacit knowledge during social participation and is effective at getting at what is hidden during social interactions. With targeted questions intermingled in the conversation, a moderator can tease out underlying politics, beliefs and facades of social relationships. In addition, targeted and informative comments by the moderator can help to better inform the stakeholders about the evaluation process and eliminate confusion and misunderstandings. Ultimately, this additional data adds richness.

I believe that had we completed such a focus group with all of the stakeholders, we would have had a more effective evaluation, which would have benefited all of the stakeholders. We were not able to collect all of the data needed to complete a thorough evaluation of the safety program in all of the schools; we were only allowed to collect data in the elementary schools.

I digress:

The Bechar & Mero-Jaffe article does concern me. As an outsider to their experiences, I have to say that while reading the article, I felt that the authors were still too sensitive about what they went through. Their description of, and choice of information to present about, the program head as one of the stakeholders was unsettling to me. I question whether this article should have been published at all.

Regarding the use of the term excessive evaluation anxiety, I have to add that just because destructive behavior exists does not mean that it always points to some anxiety about evaluation. In our case, the destructive behavior pointed to dysfunctional relationships among the stakeholders and the negative impact such relationships had on the safety program.

 

References:

Bechar, S. and Mero-Jaffe, I. (2014). Who is afraid of evaluation? Ethics in evaluation research as a way to cope with excessive evaluation anxiety: Insights from a case study. American Journal of Evaluation, 35(3): 364-376.

Donaldson, S. I. (2007). Program theory-driven evaluation science: strategies and applications. London, England: Routledge.

Ryan, K. E., Gandha, T., Culbertson, K. J. and Carlson, C. (2014). Focus group evidence: Implications for design and analysis. American Journal of Evaluation, 35(3): 328-345.

 

What is “Big Data?”

Recently, I have been talking with folks in various fields of research. One term that has caught my interest is "big data," because of the different understandings people have of it.

I chatted with a friend of mine recently about his job interview experience. The confusion was terrible. My friend, an economist, was being asked about his experience with big data by a manager with an engineering background. The manager's understanding of big data was similar to the understanding that most IT folks have, whereas my friend's understanding of the term was similar to the one many economists share. Needless to say, neither of them understood what the other was trying to say. The interview was clouded with confusion.

When I talked with several economists, they consistently understood big data to mean high-frequency data, or real-time data that is continuously being collected. For example, financial data on a specific company's stock market activity is collected every second, and household electricity usage data may be collected every 15 minutes.

When I talked with several IT people, they consistently understood big data to mean data sets that are too large to manipulate with standard methods or tools. This does not necessarily refer to high-frequency data.
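To make the contrast concrete, here is a minimal Python sketch of the two senses side by side. The file names and column names are hypothetical; the point is only what makes the data "big" in each case.

```python
import pandas as pd

# Economists' sense: high-frequency data. One observation per second is "big"
# because of the sampling rate, even if the file itself fits in memory.
ticks = pd.read_csv("stock_ticks.csv", parse_dates=["timestamp"], index_col="timestamp")
minute_bars = ticks["price"].resample("1min").ohlc()  # aggregate 1-second ticks into 1-minute bars

# IT sense: a data set too large to manipulate with standard tools in one pass,
# so it is processed in chunks regardless of how often it was sampled.
total_rows = 0
for chunk in pd.read_csv("transactions.csv", chunksize=1_000_000):
    total_rows += len(chunk)
print(total_rows)
```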

Even Professors Ward and Barker at the University of St. Andrews in Scotland found out just how confusing the use of the term has become (http://www.technologyreview.com/view/519851/the-big-data-conundrum-how-to-define-it/). They list these definitions:

1. Gartner. In 2001, a Meta (now Gartner) report noted the increasing size of data, the increasing rate at which it is produced and the increasing range of formats and representations employed. This report predated the term "big data" but proposed a three-fold definition encompassing the "three Vs": Volume, Velocity and Variety. This idea has since become popular and sometimes includes a fourth V: Veracity, to cover questions of trust and uncertainty.

2. Oracle. Big data is the derivation of value from traditional relational database-driven business decision making, augmented with new sources of unstructured data.

3. Intel. Big data opportunities emerge in organizations generating a median of 300 terabytes of data a week. The most common forms of data analyzed in this way are business transactions stored in relational databases, followed by documents, e-mail, sensor data, blogs, and social media.

4. Microsoft. “Big data is the term increasingly used to describe the process of applying serious computing power—the latest in machine learning and artificial intelligence—to seriously massive and often highly complex sets of information.”

5. The Method for an Integrated Knowledge Environment open-source project. The MIKE project argues that big data is not a function of the size of a data set but its complexity. Consequently, it is the high degree of permutations and interactions within a data set that defines big data.

6. The National Institute of Standards and Technology. NIST argues that big data is data which “exceed(s) the capacity or capability of current or conventional methods and systems.” In other words, the notion of “big” is relative to the current standard of computation.

In the field of education, use of the term big data is entangled with understandings of learning analytics (LA), the collection of large amounts of learner-produced data used to predict future learning, and educational data mining (EDM), which focuses on developing and improving methods for extracting meaning from large data sets on learning in educational settings. EDM originated from tutoring paradigms and systems, whereas LA originated from learning management systems.

When I talked with two sociologists, their understanding was more in line with the IT people's understanding: that big data concerns huge data sets that require substantial computing power to extract any meaning from them. One sociologist referred to her use of a huge data set gathered from tweets on Twitter within a short time span on one day.

Several points are clear:

1. There is too much confusion regarding the definition of the term big data.

2. Each field of study will have to settle on its own term, because the use of "big data" causes too much confusion across the board.

3. Each field needs to work on resolving the confusion that exists within its own ranks regarding the proper terms to use.

4. Academic journal boards and editors must require authors to use the most appropriate term in their publications; for example, "high-frequency data" instead of "big data."

Research versus Evaluation

In response to a recent blog post by BetterEvaluation, "Week 19: Ways of framing the difference between research and evaluation":

I appreciate the precise details offered in the blog. Such information offers clarity that is sometimes assumed. Where I feel more clarity is needed is in the use of the term "category" to differentiate and identify the terms "research" and "evaluation." The differences between research and evaluation point towards their being different methodologies. To reiterate, I continue to follow the definition of methodology used by Strauss and Corbin (1998): "a way of thinking about and studying social reality" (p. 3). Some people use the term paradigm instead of methodology.

The blogger states that research and evaluation can be understood as a dichotomy. I disagree that research and evaluation are dichotomous; I see them instead as methodologies along a continuum, where the distance between them grows or shrinks with their differences and similarities. In addition, the blogger's effort to show the various connections between the methodologies, and the ways they support each other, seems to contradict the use of the term "dichotomy." As independent methodologies, they can overlap during the data collection and data analysis processes in the types of methods being used, as well as during other components of a study. Such overlapping always occurs because of the type of research question being asked. Research questions focusing on investigating programs can include an investigation of social processes. Research questions focusing on investigating social processes can include an investigation of how a social process is connected to the impact of a policy or program. These are just two examples of many.

To conclude, I believe that any discussion of the differences between the methodologies of research and evaluation should begin by emphasizing the research or study questions as the point of origin for deciding which methodology a study calls for and how the two methodologies interact within it.

 

The Problem of Missing Information for Qualitative, Data-Coding Methods in Published Articles

I have begun the arduous task of weeding through large collections of qualitative research literature to answer one question. I will publish a commentary in the near future on this problem of missing information about qualitative data-coding methods in published articles. Many people I have talked with have commented on how journal editors love to keep information short and sweet, with less important information merely summarized. I would be glad to include your comments in my commentary, with your permission.

(Excerpts from my commentary draft)

“As one looks through the abundant literature on particular qualitative research studies, one can find a small percentage of publications that walk the reader through the precise steps of coding several kinds of data (e.g. Miles and Huberman 1994; Patton 2002; Saldaña 2013). Overall, most descriptions of a coding process have missing information. This leads to the focus of this commentary: finding out why such a lack of detail exists within the qualitative research literature regarding the specific and precise steps of data-coding methods. While continuing my review of the literature, I began to search for an answer as to why such a lack of information existed at all. Jochen Gläser and Grit Laudel (2013) discussed this omission as well, stating:

…many qualitative methods claim to lead to an answer to the research question but do not specify all the steps between the text and the answer. This is not surprising because qualitative research places heavy emphasis on interpretation. Interpretation is an ill-structured activity for which no algorithm can be provided. At the same time, the widespread reluctance to define intermediary steps and their outputs makes it often difficult to assess the contribution of a specific method along the way from texts to answers to research questions and the quality of that contribution (para 3).

Gläser and Laudel (2013) highlight that the quality and trustworthiness of qualitative studies are being called into question because of such “widespread reluctance.” I contacted Dr. Gläser via email to request further clarification on their use of the term “reluctance” to describe the cause of the situation, and to share my understanding of the cause as “widespread taken-for-grantedness.” Dr. Gläser (to paraphrase) stated that they understood “reluctance” to include everything from “taken-for-grantedness” through “inability” to “unwillingness.” He gave me permission to share his comments, which follow:

1. “Much qualitative research produces descriptions of phenomena in the light of different ideas or frames of reference. Each of the purposes of applying a method is different, which makes the application of the method look different. Thus, the application and description of methods is made difficult because “the qualitative paradigm” cannot agree on a limited set of types of purposes and link them to the choice of methods. Qualitative research doesn’t have a methodology that links types of research aims to types of methods” (Dr. Gläser, email correspondence, April 25th, 2014).

2. “Important and frequently used methods such as coding are incompletely described. The internet is full of questions of the kind ‘I have my codes. What do I do now?’ People (particularly students) just don’t know, and cannot learn it because neither the original developers of a method nor their followers ever cared to describe the method fully. If you look at textbooks, you will usually find very general descriptions followed by very specific examples. The middle ground (which is the most important for learners) is just not covered. As a result, many researchers do something they would rather not put in writing. Also as the result, they do not know whether they did the right thing, and don’t dare describe it in their publications. I think this is the most important factor preventing progress” (Dr. Gläser, email correspondence, April 25th, 2014).

In agreement with Dr. Gläser, I add that many authors simply refer the reader to the original developers of a method and the publication introducing it, and then offer a short summary of the method. Little is said about any situations where the researcher had to adapt and make changes while performing the method. It is as if the researcher followed every step perfectly, without any variations or changes. Those articles that do admit to variations quickly summarize such experiences.
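As a purely hypothetical illustration (my own sketch, not a method proposed by Gläser and Laudel or any other author), documenting the intermediary steps of a coding process could be as simple as keeping a reportable record that links each excerpt to a code and each code to a category, and that logs every revision along the way:

```python
from dataclasses import dataclass, field

@dataclass
class CodingRecord:
    excerpt_id: str   # which piece of data was coded
    code: str         # the code applied to the excerpt
    category: str     # the higher-level category the code was grouped into
    revisions: list = field(default_factory=list)  # changes made during analysis

# Hypothetical records from a hypothetical study.
records = [
    CodingRecord("interview_03_line_12", "avoids cafeteria", "unsupervised spaces"),
    CodingRecord("interview_07_line_44", "stays near office", "adult proximity"),
]

# When a code is merged or renamed, the change is recorded rather than lost.
records[0].revisions.append("merged from 'isolated areas' into 'unsupervised spaces'")

# The resulting table, revisions included, is the kind of intermediary output
# that a methods section could report.
for r in records:
    print(r.excerpt_id, "->", r.code, "->", r.category, r.revisions)
```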

In keeping with the desires of most qualitative researchers, some growing pains must occur, and one is warranted at this stage: I find it unsettling that, given how far qualitative research methods have advanced, I can still easily find articles that leave out important details about processes such as the use of data-coding methods.

Dr. Gläser further commented:

“I think the only way forward would be the advocates of the various methods agreeing on a standard for describing them in publications and the editors of journals demanding such descriptions. Their reluctance to do so points to a much more fundamental question: Do qualitative researchers believe or not believe that methods and the quality of their applications make a difference to their results?” (Email correspondence April 25th, 2014).

The purposes of my commentary are to address this widespread reluctance, or taken-for-grantedness, and to answer the question, "Why does such a lack of detail about the use of data-coding methods exist within the qualitative research literature?"

I thank Dr. Gläser for his insight and honesty. I hope that we can resolve this problem quickly and neatly, without unnecessary fussiness, so as to be able to move on to other pressing problems in the field.

A Short Book Review of Cost-Effectiveness Analysis, 2nd ed.


Levin, H. M. and McEwan, P. J. (2001). Cost-effectiveness analysis (2nd ed.). Thousand Oaks, CA: Sage Publications. ISBN 978-0-7619-1934-6 (pbk).

I recently finished reading Cost-Effectiveness Analysis (2nd ed.) by Henry M. Levin and Patrick J. McEwan. The authors show you how and when to complete cost-effectiveness, cost-benefit, cost-utility and cost-feasibility analyses, as well as sensitivity analyses. I recommend this course book to those in the field of education. I am not saying the content is not applicable to other fields of study, but most of the examples offered in the book are focused on educational programs. Each detailed chapter is, for the most part, complete and precise. The back of the book offers short literature reviews of various topics as good examples of completing specific cost analyses for particular kinds of educational programs. The authors do stress that not a lot of literature is available in general on cost analyses in education.

The only major problem I encountered while reading the book involved a lack of detail for a specific calculation. This omission should be corrected in a future edition.

In the Cost-Benefit Analysis chapter on pages 178-179:

[NB = net benefits, B = benefits, C = costs, t = year in a series of years from 1 to n, i = discount rate]

When completing a cost-benefit analysis, an evaluator compares the benefits with the costs of each alternative project. The authors go into detail about the three ways to make informed investment decisions: the benefit-cost ratio, net benefits and the internal rate of return.

Calculate BCR = B/C. If the ratio is greater than one, then the benefits outweigh the costs.

Calculate NB = B - C. A positive result means the project is desirable.
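To make these two decision rules concrete, here is a small worked example with hypothetical figures of my own (not taken from the book): if a program's discounted benefits are B = $500,000 and its discounted costs are C = $400,000, then BCR = 500,000 / 400,000 = 1.25, which is greater than one, and NB = 500,000 - 400,000 = $100,000, which is positive; by both rules the project is worthwhile.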

Here is where the trouble started. 

The authors show you the calculation for the internal rate of return (IRR), but they jump straight from stating the calculation to reporting an IRR of 0.349 without telling or showing you how they got this number. Levin and McEwan (2001) state, "The IRR in the numerical example turns out to be approximately 0.349 (or 34.9%)" (p. 179). Presumably they simply started plugging in numbers to find the IRR, but they never show any examples of plugging in those numbers. This detail matters, because the book is teaching readers how to run these calculations!

IRR is defined as the discount rate that causes the net benefits to equal zero. I have included the IRR calculation below.
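Using the variable definitions listed above, the condition can be written out as follows (my reconstruction of the relationship the book describes, not its exact notation):

$$NB(i) = \sum_{t=1}^{n} \frac{B_t - C_t}{(1+i)^t}, \qquad \text{IRR} = i^{*} \text{ such that } NB(i^{*}) = 0$$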

The authors go on to state that i = 0.349, but they do not show how they arrived at this number.

The authors set up the example by inputting numbers from a previous cost-benefit example for a program.

I offer an example of the type of calculation that should be included to show the reader how the authors "plugged in" numbers and, through trial and error, figured out what the IRR was. The authors apparently started plugging in 0.1, 0.2, 0.3, 0.4, and so on (they never state at what point they stopped plugging in numbers). A sketch of this trial-and-error search follows.

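Here is a minimal Python sketch of that kind of trial-and-error search. The benefit and cost streams are hypothetical stand-ins of my own (the book's actual example numbers are not reproduced in this post); the point is to show the coarse plugging-in step followed by narrowing in on the rate at which net benefits cross zero.

```python
def net_benefits(rate, benefits, costs):
    """Discounted net benefits: NB(i) = sum over t of (B_t - C_t) / (1 + i)^t."""
    return sum((b - c) / (1 + rate) ** t
               for t, (b, c) in enumerate(zip(benefits, costs), start=1))

# Hypothetical streams: a large cost in year 1, benefits in later years.
benefits = [0, 400, 400, 400]
costs = [800, 0, 0, 0]

# Step 1: plug in coarse rates (0.1, 0.2, 0.3, ...) until net benefits turn negative.
rate = 0.0
while net_benefits(rate + 0.1, benefits, costs) > 0:
    rate += 0.1

# Step 2: narrow in between the last positive and first negative rate.
low, high = rate, rate + 0.1
for _ in range(50):
    mid = (low + high) / 2
    if net_benefits(mid, benefits, costs) > 0:
        low = mid
    else:
        high = mid

print(f"IRR is approximately {low:.3f}")  # the rate at which NB(i) is (nearly) zero
```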

When reading this book, please keep the information in this post at hand. It will help you transition smoothly to the next chapter. I hope this helps. Cheers!


A Book Review of When to Use What Research Design

I want to share another book review that I have just completed; it will be published shortly in the International Journal of Social Research Methodology. I reviewed the reference book When to Use What Research Design by W. Paul Vogt, Dianne C. Gardner and Lynne M. Haeffele. Overall, I found this reference book refreshing. The authors did a very good job, and this is a great book for researchers and evaluators to have on their office shelf.

When to Use What Research Design

By W. Paul Vogt, Dianne C. Gardner, and Lynne M. Haeffele (The Guilford Press, New York, NY, 2012), 378pp., $63.65 (hbk), ISBN 978-1-4625-0360-5, $39.49 (pbk), ISBN 978-1-4625-0353-7 (Also available in eBook, $31.08)

Reviewed by: Sheila Fram, PhD

When to Use What Research Design, by W. Paul Vogt, Dianne C. Gardner and Lynne M. Haeffele, defines survey, interview, experimental, observational, archival and combined research designs. The reference book focuses on the "design package," or the development of the study design from research question through sampling and selection to data collection. The book is organized in an easy-access order of information, from design to sampling to ethics. The aim of the book is to help researchers make their design, sampling and ethical choices "more deliberate and defensible" (p. 1). Having used the book while starting my own study, I found that it helped me quickly locate the information I needed and that it listed specific literature to review for a more detailed understanding of a topic.

I praise the book's emphasis on the design development process from the research question to the research design, highlighting the fact that the research question always decides what steps will follow and what decisions will need to be made. The authors include wonderful research question examples for each research design. For example, they distinguish between a "causal process" and an "outcomes" research question for experimental design: "The outcome question might be: Do smaller classes (independent variable) increase student learning (dependent variable)? The process question (intervening variable) might be: How do they do so?" (p. 54).

The attention to detail was refreshing. I particularly liked the authors’ lists of important works in each chapter for in-depth discussions on a research design, a sampling technique, etc. For example, the authors highlight the foundational works of Robert K. Merton et al. (1990), The Focused Interview, and Krueger and Casey (2009), Focus Groups: A Practical Guide for Applied Research, as must-reads for doing focus group research. My praise could go on, as this book seems to be an important work.

This book does not offer complete information for a thorough understanding of each research design; the lists of references in each chapter point the researcher to the appropriate literature for more information. Other works by the authors offer more detailed information about specific methods, such as Vogt et al.'s (2011) work with Paul J. Baker on the use of comparative case studies instead of randomized controlled trials for program evaluations. Finally, the book ends with an introduction to data coding and measurement for research design decisions. This introduction is intended as a transition to the next volume being developed, which will detail methods for the data analysis stage (tentatively titled When to Use What Analysis Method, p. 2).

My main critique is that a few topics received only thin discussion. The authors do acknowledge having limited knowledge in some areas and refer the reader to the literature lists for more information. For example, Chapter 17, Ethical Issues in Archival Research (pp. 297-305), dissolves into unanswered questions because of the uncertainty that exists in doing archival research. A better approach would be to include additional examples for the three "gray areas": flawed data and whistleblowing, good data from a tainted source, and withholding reports to prevent harm. The additional examples would offer clarity and give readers a better understanding of the complexity involved in maintaining ethical standards in archival research.

The book contributes to the literature in several ways. In most of the chapters, the authors highlight gaps in the literature. On page 156, they state that there are no known publications on sampling and recruiting for interview research, and they ask readers to contact them if they know of any sources. Highlighting such gaps promotes the advancement of knowledge. An additional contribution is the authors' underlying support for research that is guided by a research question or a problem needing investigation. The authors stated:

Although it is not uncommon to hear students say, before they have a research question, “I want to do interview research” or “I’d like to conduct an experiment,” we think that choices among research designs and sampling methods are better made after you have decided on your research topic and question(s), not before (p. 1).

Finally, the third contribution is the book itself. The authors declared that their purpose in writing the book was to fill a gap in the reference book and textbook literature on research designs. I recognize how pivotal this book can be in steering researchers towards appropriate information.

Overall, this reference book does what the authors say it should do, which is to describe various design packages and define the research designs involved in them. It is obvious from the size of the book that it cannot contain all the information a researcher might need, but the authors do a wonderful job of directing researchers towards the appropriate literature for further details.

 

References

Krueger, R. A. and Casey, M. A. (2009). Focus groups: A practical guide for applied research (4th ed.). Los Angeles, CA: Sage.

Merton, R. K., Fiske, M., and Kendall, P. L. (1990/1956). The focused interview. New York, NY: Free Press.

Vogt, W. P., Gardner, D., Haeffele, L. and Baker, P. (2011). Innovations in program evaluation: Comparative case studies as an alternative to RCTs. In M. Williams & W.P. Vogt (Eds.), The SAGE handbook of innovation in social research methods (pp. 289-320). London: Sage.