What is up with the omission of information in Vogt, Vogt, Gardner, and Haeffele 2014 chapter?

Hello All. I hope you are well today.

I just received a copy of Vogt et al.'s (2014) Selecting the Right Analyses for Your Data: Quantitative, Qualitative, and Mixed Methods. I was excited to begin reading this book because I thoroughly enjoyed their previous book, When to Use What Research Design (2013), on the use of data collection methods and sampling. This new book is supposed to complete the research process discussed in the first book.

Unfortunately, I am still reading this second book, but I already feel compelled to comment on the omission of information in the subsection "Technologies for Recording Observational Data," on page 119. This section literally throws in a few sentences on visual sociology: the authors define the term and include examples for each part of the definition. The next paragraph states:

“Visual social research seems underutilized. Rigor in the coding and analysis of visual data does not appear to us to have progressed much beyond nor often attained the levels demonstrated in the classic by Gregory Bateson and Margaret Mead, Balinese Character: A Photographic Analysis, published more than 70 years ago. Mead and Bateson did not simply illustrate, they integrated photographic data into their analyses. [Next paragraph] With the easy availability of photographic technology, for example on cell phones, one might expect visual sociology and anthropology to have become more widespread. Perhaps visual recordings have remained underutilized because of regulations regarding research ethics, particularly the anonymity of research participants…” (pp. 119-120).

WOW. To say the least. The authors refer to the journal Visual Studies, the International Visual Sociology Association, Pierre Bourdieu's (1965) work, and Douglas Harper's (1988) article as their references for anything regarding visual sociology. Having published in the journal, having chaired and presented at the Association's conferences repeatedly, and having met Dr. Harper and other hard-working sociologists at one of those conferences, I have to say that Vogt, Vogt, Gardner, and Haeffele need to seriously apologize for their inaccurate comments and update the information in their book. They singlehandedly disrespected an entire discipline and highlighted just how ill-informed they are, even after appearing to do a proper literature review for this section of their book.

At the very least, they could have referenced Ball and Smith's 1992 book, Analyzing Visual Data, to see that coding and analysis efforts in the 1990s were trying to advance the discipline. What about van Leeuwen and Jewitt's (2001) Handbook of Visual Analysis? That handbook offers details about "cooking" the data and preparing visual data for analysis. Even my own small article with Dr. Margolis, published in Visual Studies (Fram and Margolis 2011), offers an example of how to code and analyze visual data and how to apply our new coding method, archivization.

Their comment gives the impression that an entire discipline has done nothing to advance itself in more than 50 years. At the very least, the section is written in a way that lends itself to conveying serious misunderstandings. I can agree that every discipline has moments in its history when it stalls and does not progress, but to totally discount a group of people who have worked hard to advance visual sociology after 1942 and up to Bourdieu (1965), then after 1965 and up to Harper's (1988) article, then after 1988… Come on.

What the heck…

The authors need to offer some serious clarification and an apology.

I will continue to read this latest book by the authors, because now I am concerned that more information has been omitted.

I am speechless….


Ball, M.S. and Smith, G.W.H. 1992. Analyzing Visual Data. Newbury Park, CA: Sage Publications.

Bourdieu, P. 1965. Un art moyen: Essai sur les usages sociaux de la photographie. Paris: Éditions de Minuit.

Harper, D. 1988. Visual sociology: Expanding sociological vision. American Sociologist, 21, 54-70.

Van Leeuwen, T. and Jewitt, C. 2001. Handbook of Visual Analysis. London: Sage Publications, Ltd.

Vogt, W.P., Vogt, E.R., Gardner, D.C. and Haeffele, L.M. 2014. Selecting the Right Analyses for Your Data: Quantitative, Qualitative, and Mixed Methods. New York: Guilford Press.

Using quantitative analyses on qualitative data gathered: Is there a recipe for success among the methods used?

I have always been a supporter of the mixed methods research methodology. To reiterate from my previous posts, it is all about answering the research questions: the research questions decide what methodology and what methods to use to answer them.

In the 2014 article, "Quantitative Analysis of Qualitative Information From Interviews: A Systematic Literature Review," Fakis, Hilliam, Stoneley, and Townend treat a point as inconsequential. Overall, the authors present a strong argument for using quantitative analysis methods on qualitative information to generate new hypotheses and to test theories. This post addresses the inconsequential point that "the quantification of qualitative information is not related to specific qualitative technique and is not an interest only for specific type of qualitative researchers" (p. 156).

I have to disagree and state that this topic should be investigated further. Based on my experience with and knowledge of methods use, I recognize that a method's essential process of reducing (versus organizing) data is a key factor in successfully using quantitative analysis methods to extract macro- and meso-level patterns from the data. In my 2013 article (posted on my blog somewhere!), I clearly show a complex reduction process for the constant comparative analysis method. Such a process could work as an advantage when using a particular quantitative analysis method. I am not an expert in quantitative analysis, but in my experience, thoroughly and effectively reducing the qualitative data during a qualitative analysis leads to less variation in the independent variables when you begin to use a quantitative analysis method; as a result, the researcher gets a stronger relationship between the independent and dependent variables. This suggests that the reducing and reorganizing stages of specific qualitative analysis methods make them more suitable for use with specific quantitative analysis methods. In general, the mixed methods methodology is grounded in this logic, but the authors seem to brush off this connection, even though it is recognized in their literature review.

The authors state that the content analysis method was commonly used and showed more valid and reliable results when an acceptable sample size was accounted for. Their misunderstanding lies in their passive inclusion of "grounded theory for analyzing" instead of taking into account the strengths of the constant comparative analysis method, outside of grounded theory, for reducing data. Similar to an algorithm, a researcher works the data by reducing and/or reorganizing it, following a finite list of well-defined steps. It is logical to assume that specific qualitative analysis methods with more precise reduction processes are better suited to the use of specific quantitative analysis methods.

As I have stated in a previous post, many published journal articles do not offer enough details about the data-coding stage of their research projects. This issue can easily contribute to misguided understandings about the precision of the reduction processes for specific qualitative analysis methods, such as the content analysis method and the thematic analysis method. Content analysis has its origins in quantitative research; therefore, it is hardly a jump to use quantitative analyses on qualitative information that was processed with the content analysis method. Traditionally, thematic analysis has been a qualitative method, and recent qualitative analysis software has made the jump easier.
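The jump from content-analysis codes to quantitative data can be shown with a minimal sketch. The participants, codes, and counts below are entirely hypothetical, chosen only to illustrate the reduction step: once segments have been coded, tallying them per case yields ordinary quantitative variables that a statistical method could then consume.

```python
from collections import Counter

# Hypothetical coded interview segments: each segment has already
# been assigned a qualitative code during content analysis.
coded_segments = [
    ("participant_1", "isolation"),
    ("participant_1", "isolation"),
    ("participant_1", "support"),
    ("participant_2", "support"),
    ("participant_2", "support"),
    ("participant_2", "isolation"),
    ("participant_2", "support"),
]

# Reduce the coded segments to per-participant code frequencies.
frequencies = {}
for participant, code in coded_segments:
    frequencies.setdefault(participant, Counter())[code] += 1

# The resulting counts are plain quantitative data, ready for a
# chi-square test, regression, or other statistical method.
for participant, counts in sorted(frequencies.items()):
    print(participant, dict(counts))
```

The point of the sketch is that the reduction step (segments to counts) is what makes the quantitative follow-up possible at all, which is exactly the connection the post argues deserves closer study.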

The authors stated:

…the statistical analysis of qualitative information was observed more in data derived from content analysis, which is used for extracting objective content from texts for identifying themes and patterns (Hsieh & Shannon, 2005). The type of information extracted from the content analysis could be measured and transformed to quantitative data more regularly than using other methods of qualitative analysis. In six studies content analysis was initially performed for analyzing qualitative data. However, six of the articles in the review used thematic analysis or grounded theory for analyzing the qualitative information. The variety of qualitative methods used before the statistical methods are applied could indicate that the quantification of qualitative information is not related to specific qualitative technique and is not an interest only for specific type of qualitative researchers (p. 156).

The authors bring up a valuable point but treat it with less importance than it deserves. Future research on methods use in mixed methods research should include an investigation of the data-coding stage: the initial analysis of qualitative data using qualitative analysis methods and the complementary use of quantitative analysis methods to glean new hypotheses or to test a theory. This investigation should focus on the reduction and reorganization processes that occur during analysis and highlight which qualitative analysis methods are more suitable for follow-up use of particular quantitative analysis methods. The authors' suggestion for future research is to work toward an "advanced statistical modeling method that will be able to explore the complex relationships arising from the qualitative information" (p. 158). They need to take a step back and first look at the connection between the data-reduction processes of the qualitative and quantitative methods.

I believe that the authors missed out on an opportunity to investigate a new hypothesis, missed out on setting the stage to influence others to advance the methodology, and missed out on giving mixed methods designs more credit than they did in their article.


Fakis, A., Hilliam, R., Stoneley, H. and Townend, M. 2014. Quantitative analysis of qualitative information from interviews: A systematic literature review. Journal of Mixed Methods Research, 8(2): 139-161.

Hsieh, H. F., & Shannon, S. E. 2005. Three approaches to qualitative content analysis. Qualitative Health Research, 15, 1277-1288.

Destructive Behavior in Evaluations

This posting discusses what some evaluators have experienced as destructive behaviors by stakeholders during evaluation projects.

For example, my experience and the experiences of my associate while evaluating a safety program for a school district are not discussed in our article, "How the School Built Environment Exacerbates Bullying and Peer Harassment" (Fram and Dickmann 2012). Our experiences were not as extreme as some we have heard about, but they have similarities to those of Bechar and Mero-Jaffe (2014), who stated in their article:

Overtly, the program head had agreed to the evaluation and recognized its importance, covertly, his attitude showed inconsistency in his willingness to support the evaluation, which was expressed during the course of the evaluation and in his reaction to the final report; we interpret these as sign of fear of evaluation (p. 369).

We did not interpret our experience as a sign of fear, but as a serious issue involving the cohesiveness of district leaders, namely the superintendent and all of the school principals. Our evidence pointed toward the leadership style of the superintendent and the politics embedded in the school system. We even stated in our article that politics at the district level had a major negative impact on the safety program at the schools. If any fear of evaluation existed among the stakeholders, it was purely about the possible loss of their jobs. Any fear we witnessed was a symptom of the dysfunctional relationships among the stakeholders.

Bechar and Mero-Jaffe refer to Donaldson's (2007) introduction of a new term, excessive evaluation anxiety (XEA). I am inclined to say that our experiences had little connection, if any, to this phenomenon.

I am proposing two ways to lower the possibilities of experiencing destructive behavior from stakeholders.

1. Plan the initial stakeholder meeting more thoroughly and incorporate an informative session (whether on another day or the same day) to discuss the perceptions and concerns of all of the stakeholders involved. Meet with individual stakeholders afterwards to further address their concerns in private.

We did have an informative and well-planned meeting with the stakeholders together and my associate and I both met separately with each stakeholder to address any concerns that they had. For the most part, we believe that the individual meetings improved our chances of collecting data effectively with support from the stakeholders. One of the principals wanted to be involved and we made this happen even though the superintendent did not want any principal involved at any stage. We asked the principal to help us orchestrate the informed consent meeting involving the teachers and I asked the principal to walk with me as I took photographs to offer me insight about particular spaces of the school. My associate was able to show the superintendent the benefit of having the principal involved.

2. Before the initial stakeholder meeting, schedule and orchestrate a focus group designed from a social constructivist perspective, or what Ryan, Gandha, Culbertson, and Carlson (2014) call a Type B focus group. This style of focus group reveals tacit knowledge during social participation and is effective at getting at what is hidden during social interactions. With targeted questions intermingled in the conversation, a moderator can tease out underlying politics, beliefs, and the facades of social relationships. In addition, targeted and informative comments by the moderator can help better inform the stakeholders about the evaluation process and eliminate confusion and misunderstandings. Ultimately, this additional data adds richness.

I believe that had we completed such a focus group with all of the stakeholders, we would have had a more effective evaluation, which would have benefited everyone. We were not able to collect all of the data needed to complete a thorough evaluation of the safety program in all of the schools; we were only allowed to collect data in the elementary schools.

I digress:

The Bechar and Mero-Jaffe article does concern me. As an outsider to the experiences, I have to say that while reading the article, I felt that the authors were still too sensitive about their experiences. Their description, and their choice of which information to present, regarding the program head as one of the stakeholders was unsettling to me. I question whether this article should have been published at all.

Regarding the use of the term excessive evaluation anxiety, I have to add that just because destructive behavior exists does not mean that it always points to some anxiety about evaluation. In our case, the destructive behavior pointed to dysfunctional relationships among the stakeholders and the negative impact those relationships had on the safety program.



Bechar, S. and Mero-Jaffe, I. (2014). Who is afraid of evaluation? Ethics in evaluation research as a way to cope with excessive evaluation anxiety: Insights from a case study. American Journal of Evaluation, 35(3): 364-376.

Donaldson, S. I. (2007). Program theory-driven evaluation science: strategies and applications. London, England: Routledge.

Ryan, K. E., Gandha, T., Culbertson, K. J. and Carlson, C. (2014). Focus group evidence: Implications for design and analysis. American Journal of Evaluation, 35(3): 328-345.


What is “Big Data?”


I have been talking with folks in various fields of research. One term that has caught my interest is "big data," because of the very different understandings people have of it.

I chatted recently with a friend of mine about his job interview experience. The confusion was terrible. My friend, an economist, was being asked questions about his experience with big data by a manager with an engineering background. The manager's understanding of big data was similar to the understanding most IT folks have, whereas my friend's understanding of the term was the one common among economists. Needless to say, neither of them understood what the other was trying to say. The interview was clouded with confusion.

When I talked with several economists, their understanding was consistent: big data really refers to high-frequency data, or real-time data that is continuously being collected. For example, financial data on stock market activity for a specific company is collected every second, or household electricity usage is recorded every 15 minutes.

When I talked with several IT people, their understanding was also consistent among them: big data means data sets that are too large to manipulate with standard methods or tools. This does not necessarily refer to high-frequency data.
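The economists' sense of the term can be made concrete with a small sketch. The per-second prices below are made up purely for illustration; the point is that high-frequency data is defined by its collection rate, and a typical first step is reducing the stream to a lower-frequency series before modeling.

```python
from datetime import datetime, timedelta

# Hypothetical per-second price observations: three minutes of
# one-second "high-frequency" data, the economists' sense of big data.
start = datetime(2014, 1, 2, 9, 30)
ticks = [(start + timedelta(seconds=i), 100.0 + (i % 7) * 0.01)
         for i in range(180)]

# Reduce the stream to one observation per minute (the mean price),
# the kind of aggregation an analyst applies before modeling.
buckets = {}
for ts, price in ticks:
    minute = ts.replace(second=0)
    buckets.setdefault(minute, []).append(price)

per_minute = {m: sum(p) / len(p) for m, p in sorted(buckets.items())}
```

Note that nothing here requires unusual computing power: the data set is "big" only in its frequency, which is exactly where the economists' usage parts ways with the IT usage.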

Even Professors Ward and Barker at the University of St. Andrews in Scotland found out just how confusing the use of the term has become: http://www.technologyreview.com/view/519851/the-big-data-conundrum-how-to-define-it/ They list these definitions:

1. Gartner. In 2001, a Meta (now Gartner) report noted the increasing size of data, the increasing rate at which it is produced, and the increasing range of formats and representations employed. This report predated the term "big data" but proposed a three-fold definition encompassing the "three Vs": Volume, Velocity, and Variety. This idea has since become popular and sometimes includes a fourth V, veracity, to cover questions of trust and uncertainty.

2. Oracle. Big data is the derivation of value from traditional relational database-driven business decision making, augmented with new sources of unstructured data.

3. Intel. Big data opportunities emerge in organizations generating a median of 300 terabytes of data a week. The most common forms of data analyzed in this way are business transactions stored in relational databases, followed by documents, e-mail, sensor data, blogs, and social media.

4. Microsoft. “Big data is the term increasingly used to describe the process of applying serious computing power—the latest in machine learning and artificial intelligence—to seriously massive and often highly complex sets of information.”

5. The Method for an Integrated Knowledge Environment open-source project. The MIKE project argues that big data is not a function of the size of a data set but its complexity. Consequently, it is the high degree of permutations and interactions within a data set that defines big data.

6. The National Institute of Standards and Technology. NIST argues that big data is data which “exceed(s) the capacity or capability of current or conventional methods and systems.” In other words, the notion of “big” is relative to the current standard of computation.

In the field of education, the use of the term big data is convoluted with understandings of learning analytics (LA), the collection of large amounts of learner-produced data to predict future learning, and educational data mining (EDM), which focuses on developing and improving methods for extracting meaning from large data sets on learning in educational settings. EDM originated from tutoring paradigms and systems, whereas LA originated from learning management systems.

When I talked with two sociologists, their understanding was more in line with the IT people's: big data concerns huge data sets that require substantial computing power to extract any meaning from the data. One sociologist referred to her use of a huge data set of tweets gathered from Twitter within a short time span on a single day.

Several points are clear:

1. We have too much confusion going on regarding the definition of the term big data.
2. Each field of study is going to have to come up with its own term, because the use of "big data" causes too much confusion across the board.
3. Each field needs to work on resolving the confusion that exists within it regarding the proper terms to use.
4. Academic journal boards and editors must require authors to use the most appropriate term in their publications. For example, use "high-frequency data" instead of "big data."

Research versus Evaluation

In response to a recent blog post by BetterEvaluation: Week 19: Ways of framing the difference between research and evaluation

I appreciate the precise details offered in the blog post. Such information offers clarity that is sometimes assumed. Where I feel more clarity is needed is in the use of the term "category" to differentiate and identify the terms "research" and "evaluation." The differences between research and evaluation point toward their being different methodologies. To reiterate, I continue to follow the definition of the term methodology used by Strauss and Corbin (1998): "a way of thinking about and studying social reality" (p. 3). Some people will use the term paradigm instead of methodology.

The BetterEvaluation blogger states that research and evaluation can be understood as a dichotomy. I disagree that research and evaluation are dichotomous; rather, they can be viewed as methodologies along a continuum, where the distance between them changes as similarities give way to differences. In addition, the blogger's effort to show the various connections and the mutually supportive qualities of the two methodologies seems to contradict the use of the term "dichotomy." As independent methodologies, they can overlap during the data collection and data analysis processes, in the types of methods being used, as well as during other components of a study. Such overlapping always occurs because of the type of research question being asked. Research questions focusing on investigating programs can include an investigation of social processes; research questions focusing on investigating social processes can include an investigation of how a social process connects to the impact of a policy or program. These are just two examples of many.

To conclude, I believe that any discussion of the differences between the methodologies of research and evaluation should begin by emphasizing the research/study questions as the ultimate element, the point of origin, for deciding which methodology is of concern for a study and how the two methodologies interact in a study.


The Problem of Missing Information for Qualitative, Data-Coding Methods in Published Articles

I have begun the arduous task of weeding through large collections of qualitative research literature to answer one question. I will publish a commentary in the near future on this problem of missing information for qualitative data-coding methods in published articles. Many people I have talked with have commented on how journal editors love to keep information short and sweet, or to have less important information merely summarized. I would be glad to share your comments and include them in my commentary, with your permission to do so.

(Excerpts from my commentary draft)

“As one looks through the abundant literature on particular qualitative research studies, one finds only a small percentage of publications that walk the reader through the precise steps of coding several kinds of data (e.g., Miles and Huberman 1994; Patton 2002; Saldaña 2013). Overall, most descriptions of a coding process have missing information. This leads to the focus of this commentary: finding out why such a huge lack of detail exists within the qualitative research literature regarding the specific and precise steps of data-coding methods. While continuing my review of the literature, I began to search for an answer as to why such a lack of information existed at all. Jochen Gläser and Grit Laudel (2013) discussed this omission as well, stating:

…many qualitative methods claim to lead to an answer to the research question but do not specify all the steps between the text and the answer. This is not surprising because qualitative research places heavy emphasis on interpretation. Interpretation is an ill-structured activity for which no algorithm can be provided. At the same time, the widespread reluctance to define intermediary steps and their outputs makes it often difficult to assess the contribution of a specific method along the way from texts to answers to research questions and the quality of that contribution (para 3).

Gläser and Laudel (2013) highlight that the quality and trustworthiness of qualitative studies is being called into question because of such “widespread reluctance.” I contacted Dr. Gläser via email to request further clarification on their use of the term “reluctance” to describe the cause of the situation and to share my understanding of the cause of the situation as “widespread taken-for-grantedness.” Dr. Gläser (to paraphrase) stated that they understood “reluctance” to include “taken-for-grantedness,” through “inability” to “unwillingness.” He gave me permission to share his comment, and as such he stated:

1. “Much qualitative research produces descriptions of phenomena in the light of different ideas or frames of reference. Each of the purposes of applying a method is different, which makes the application of the method look different. Thus, the application and description of methods is made difficult because “the qualitative paradigm” cannot agree on a limited set of types of purposes and link them to the choice of methods. Qualitative research doesn’t have a methodology that links types of research aims to types of methods” (Dr. Gläser, email correspondence, April 25th, 2014).

2. “Important and frequently used methods such as coding are incompletely described. The internet is full of questions of the kind ‘I have my codes. What do I do now?’ People (particularly students) just don’t know, and cannot learn it because neither the original developers of a method nor their followers ever cared to describe the method fully. If you look at textbooks, you will usually find very general descriptions followed by very specific examples. The middle ground (which is the most important for learners) is just not covered. As a result, many researchers do something they would rather not put in writing. Also as the result, they do not know whether they did the right thing, and don’t dare describe it in their publications. I think this is the most important factor preventing progress” (Dr. Gläser, email correspondence, April 25th, 2014).

In agreement with Dr. Gläser, I add that many authors refer the reader to a method's original developers and the publication introducing it, and then offer only a short summary of the method. Little is said about situations in which the researcher had to adapt and make changes while performing the method. It is as if the researcher perfectly followed every step without any variation. Those articles that do admit to variations quickly summarize such experiences.
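One way to supply the missing "middle ground" Dr. Gläser describes would be to log every reduction decision as it is made, so that the path from initial codes to final categories can be reported alongside the results. Here is a minimal sketch of that idea; the codes, categories, and rationales are entirely hypothetical, invented only to illustrate the audit trail.

```python
# Hypothetical initial codes from a first coding pass.
codes = {"feeling alone", "no one to call", "helpful neighbor", "church group"}
audit_log = []

def merge(source_codes, category, rationale):
    """Replace a set of codes with one category and record why."""
    for c in source_codes:
        codes.discard(c)
    codes.add(category)
    audit_log.append({"merged": sorted(source_codes),
                      "into": category,
                      "rationale": rationale})

# Each reduction step is performed AND documented in one move.
merge({"feeling alone", "no one to call"}, "isolation",
      "both describe an absence of social contact")
merge({"helpful neighbor", "church group"}, "support",
      "both describe sources of practical or emotional help")

# The audit log is exactly the material a methods section could report.
for entry in audit_log:
    print(entry["merged"], "->", entry["into"], ":", entry["rationale"])
```

A researcher who kept even this simple a record could describe the intermediate steps between texts and answers, instead of leaving the reader to guess how the final categories were reached.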

Some growing pains must occur in any field, and one is warranted now: I find it unsettling that, at this stage in the advancement of qualitative research methods, I can still easily find articles that leave out important details about processes such as the use of data-coding methods.

Dr. Gläser further commented:

“I think the only way forward would be the advocates of the various methods agreeing on a standard for describing them in publications and the editors of journals demanding such descriptions. Their reluctance to do so points to a much more fundamental question: Do qualitative researchers believe or not believe that methods and the quality of their applications make a difference to their results?” (Email correspondence April 25th, 2014).

The purposes of my commentary are to address this widespread reluctance or taken-for-grantedness and to answer the question, “Why does a lack of details exist within qualitative research literature on data-coding methods use?”

I thank Dr. Gläser for his insight and honesty. I hope that we can resolve this problem quickly and neatly, without unnecessary fussiness, so as to be able to move on to other pressing problems in the field.