Taking some time…

Hello to all out there doing research and evaluations!

I hope you are feeling well. I thank those who have repeatedly returned to my blog to gain some insight or information, and I hope that you keep coming back for more. Right on schedule, I notice an increase in viewers about this time of year, toward the end of the semester. I wish all of you students good luck on your theses and dissertations! Remember, start with the research question FIRST, then decide what methodology and methods to use based on that question.

It is the day before Halloween, and instead of doing some serious examination or completing some complex evaluation, I have decided to take some time to be less serious. I will post my next blog entry at the beginning of January 2015. Besides working, I plan to spend more quality time with my family, enjoy Halloween with my child, take the time to vote next week, enjoy Thanksgiving with my family, enjoy the Winter Solstice, and enjoy all the other holidays in November and December. We Generation Xers have a tendency to work hard and play harder. I have to add one of my favorite movie quotes, from the character Ferris Bueller: “Life moves pretty fast. If you don’t stop and look around once in a while, you could miss it” (John Hughes, 1986). It is that time again for me to stop and look, or observe. I wish all of you a safe and happy holiday season. Please remember to donate food, time, or money to help those humans and animals in need. Happy Holidays! Cheers!

What is up with the omission of information in the Vogt, Vogt, Gardner, and Haeffele (2014) chapter?

Hello All. I hope you are well today.

I just received a copy of Vogt et al.’s (2014) Selecting the Right Analyses for Your Data: Quantitative, Qualitative and Mixed Methods. I was excited to begin reading it because I thoroughly enjoyed their previous book, When to Use What Research Design (2013), on the use of data collection methods and sampling. This new book is supposed to complete the research process discussed in the first one.

Unfortunately, I am still reading this second book, but I felt compelled to comment on the omission of information in the subsection called “Technologies for Recording Observational Data,” on page 119. This section literally throws in a few sentences on visual sociology. The authors define the term and include examples for each part of the definition. The next paragraph states:

“Visual social research seems underutilized. Rigor in the coding and analysis of visual data does not appear to us to have progressed much beyond nor often attained the levels demonstrated in the classic by Gregory Bateson and Margaret Mead, Balinese Character: A Photographic Analysis, published more than 70 years ago. Mead and Bateson did not simply illustrate, they integrated photographic data into their analyses. [Next paragraph] With the easy availability of photographic technology, for example on cell phones, one might expect visual sociology and anthropology to have become more widespread. Perhaps visual recordings have remained underutilized because of regulations regarding research ethics, particularly the anonymity of research participants…” (pp. 119-120).

WOW. To say the least. The authors refer to the Visual Studies Journal, the International Visual Sociology Association, Pierre Bourdieu’s (1965) work, and Douglas Harper’s (1988) article in that journal as official references for anything regarding visual sociology. Having published in the journal, having chaired and presented at the Association’s conferences repeatedly, and having met Dr. Harper and other hardworking sociologists at one of those conferences, I would have to say that Vogt, Vogt, Gardner, and Haeffele need to seriously apologize for their inaccurate comments and update the information in their book. They single-handedly disrespected an entire discipline and highlighted just how ill-informed they are, even after appearing to do a proper literature review for this section of their book.

At the very least, they could have referenced Ball and Smith’s 1992 book, Analyzing Visual Data, to see that coding and analysis efforts in the 1990s were trying to advance the discipline. What about van Leeuwen and Jewitt’s (2001) Handbook of Visual Analysis? That handbook offers details about “cooking” the data and preparing visual data for analysis. Even my little article with Dr. Margolis, published in the Visual Studies Journal (Fram and Margolis 2011), offers an example of how to code and analyze visual data and how to apply our new coding method, archivization.

Their comment gives off the sense that an entire discipline has done nothing to advance for more than 70 years. At the very least, the section is written in a way that lends itself to serious misunderstandings. I can agree that any and all disciplines have moments in their history when they do not advance, when they are stuck and not progressing, but to totally discount a group of people who have worked hard to advance visual sociology after 1942 and up to Pierre Bourdieu (1965), then after 1965 and up to Harper’s 1988 article, then after 1988… Come on.

What the heck…

The authors need to offer some serious clarification and an apology.

I will continue to read this latest book by the authors because now I am concerned that more information may have been omitted.

I am speechless….

References:

Ball, M.S. and Smith, G.W.H. 1992. Analyzing Visual Data. Newbury Park, CA: Sage Publications.

Bourdieu, P. 1965. Un art moyen: Essai sur les usages sociaux de la photographie. Paris: Les Éditions de Minuit.

Harper, D. 1988. Visual sociology: Expanding sociological vision. American Sociologist, 21, 54-70.

Van Leeuwen, T. and Jewitt, C. 2001. Handbook of Visual Analysis. London: Sage Publications, Ltd.

Vogt, W.P., Vogt, E.R., Gardner, D.C. and Haeffele, L.M. 2014. Selecting the Right Analyses for Your Data: Quantitative, Qualitative, and Mixed Methods. New York: Guilford Press.

Using quantitative analyses on qualitative data: Is there a recipe for success among the methods used?

I have always been a supporter of the mixed methods research methodology. To reiterate from my previous posts, it is all about answering the research questions. The research questions decide what methodology and what methods to use to answer them.

In the 2014 article, “Quantitative Analysis of Qualitative Information From Interviews: A Systematic Literature Review,” by Fakis, Hilliam, Stoneley, and Townend, a point is treated as inconsequential. Overall, the authors present a strong argument for using quantitative analysis methods on qualitative information to generate new hypotheses and to test theories. This post addresses the inconsequential point that “the quantification of qualitative information is not related to specific qualitative technique and is not an interest only for specific type of qualitative researchers” (p. 156).

I have to disagree and state that this topic should be further investigated. Based on my experience with and knowledge of methods use, I recognize that a method’s essential process of reducing (versus organizing) data is a key factor in successfully using quantitative analysis methods to extract macro- and meso-level patterns from the data. In my 2013 article (posted on my blog somewhere!), I clearly show a complex reduction process for the constant comparative analysis method. Such a process could work as an advantage for using a particular quantitative analysis method. I am not an expert in quantitative analysis, but in my experience, if you have thoroughly and effectively reduced the qualitative data during a qualitative analysis, there is less variation in the independent variables when you begin to use a quantitative analysis method, and the researcher therefore gets a stronger relationship between the independent and dependent variables. This highlights that the reducing and reorganizing stages of specific qualitative analysis methods make them more suitable for use with specific quantitative analysis methods. In general, the mixed methods methodology is grounded in this logic, but the authors seem to brush off this connection recognized in their literature review.
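
To make the reduce-then-quantify idea concrete, here is a minimal sketch in Python. The data, codes, and group labels are invented for illustration; this is not the procedure from my 2013 article or from Fakis et al., just one way the idea could play out: interview excerpts already reduced to a handful of codes are collapsed into counts, and the counts are then analyzed quantitatively.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical coded interview excerpts: one row per excerpt,
# already reduced to a small set of qualitative codes.
coded = pd.DataFrame({
    "group": ["teacher", "teacher", "teacher", "parent", "parent", "parent"],
    "code":  ["safety_concern", "supervision", "supervision",
              "safety_concern", "safety_concern", "supervision"],
})

# The reduction made visible: collapse excerpts into category counts per group.
counts = pd.crosstab(coded["group"], coded["code"])
print(counts)

# The quantitative step: test whether code frequencies differ across groups.
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```

With real data the sample would, of course, need to be large enough for the test’s assumptions to hold; the toy counts here are only there to show where the reduced codes feed into the statistics.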

The authors stated that the content analysis method was commonly used and showed more valid and reliable results when an acceptable sample size was accounted for. Their misunderstanding occurs in their passive inclusion of “grounded theory for analyzing” instead of taking into account the strengths of the constant comparative analysis method, outside of grounded theory, for reducing data. Similar to an algorithm, a researcher works the data by reducing and/or reorganizing it, following a finite list of well-defined steps. It is logical to assume that specific qualitative analysis methods with more precise reduction processes are better suited to the use of specific quantitative analysis methods.
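
As a rough illustration of that algorithm-like quality, here is a simplified sketch of the constant comparative reduction loop. It is my own caricature, not the procedure from my 2013 article, and the string-similarity measure is only a crude stand-in for the researcher’s judgment when comparing incidents.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Crude textual similarity standing in for the researcher's comparison.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def constant_comparative_reduce(incidents, threshold=0.45):
    """Assign each incident to an existing category or open a new one."""
    categories = {}  # category label -> list of member incidents
    for incident in incidents:
        best_label, best_score = None, 0.0
        for label, members in categories.items():
            score = max(similarity(incident, m) for m in members)
            if score > best_score:
                best_label, best_score = label, score
        if best_label is not None and best_score >= threshold:
            categories[best_label].append(incident)  # reduce: fold into an existing category
        else:
            categories[incident] = [incident]        # reorganize: open a new category
    return categories

incidents = [
    "students crowd the stairwell between classes",
    "crowding in the stairwell after lunch",
    "no adult supervision on the playground",
]
print(constant_comparative_reduce(incidents))
```

The point of the sketch is simply that the reduction follows well-defined, repeatable steps, which is what makes the resulting categories easier to hand off to a quantitative analysis.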

As I have stated in a previous post, many published journal articles do not offer enough detail about the data-coding stage of the research project. This issue can easily contribute to misguided understandings about the precision of the reduction processes of specific qualitative analysis methods, such as the content analysis method and the thematic analysis method. Content analysis has its origins in quantitative research; therefore, it is hardly a jump to use quantitative analyses on qualitative information when the content analysis method is involved. Traditionally, thematic analysis has been a qualitative method; recent qualitative analysis software has made the jump easier.

The authors stated:

…the statistical analysis of qualitative information was observed more in data derived from content analysis, which is used for extracting objective content from texts for identifying themes and patterns (Hsieh & Shannon, 2005). The type of information extracted from the content analysis could be measured and transformed to quantitative data more regularly than using other methods of qualitative analysis. In six studies content analysis was initially performed for analyzing qualitative data. However, six of the articles in the review used thematic analysis or grounded theory for analyzing the qualitative information. The variety of qualitative methods used before the statistical methods are applied could indicate that the quantification of qualitative information is not related to specific qualitative technique and is not an interest only for specific type of qualitative researchers (p. 156).

The authors bring up a valuable point but treat it as if it were of little importance. Future research on methods use in mixed methods research should include an investigation of the data-coding stage: the initial analysis of qualitative data using qualitative analysis methods and the complementary use of quantitative analysis methods to glean new hypotheses or to test a theory. This investigation should focus on the reduction and reorganization processes that occur during analysis and highlight which qualitative analysis methods are more suitable for a follow-up use of particular quantitative analysis methods. The authors’ suggestion for future research is effort toward developing an “advanced statistical modeling method that will be able to explore the complex relationships arising from the qualitative information” (p. 158). They first need to take one step back and look at the connection between the data-reduction processes of the qualitative methods and the quantitative methods that follow.

I believe that the authors missed out on an opportunity to investigate a new hypothesis, missed out on setting the stage to influence others to advance the methodology, and missed out on giving mixed methods designs more credit than they did in their article.

References

Fakis, A., Hilliam, R., Stoneley, H. and Townend, M. 2014. Quantitative analysis of qualitative information from interviews: A systematic literature review. Journal of Mixed Methods Research, 8(2): 139-161.

Hsieh, H.F. and Shannon, S.E. 2005. Three approaches to qualitative content analysis. Qualitative Health Research, 15, 1277-1288.

Destructive Behavior in Evaluations

This posting discusses what some evaluators have experienced as destructive behaviors by stakeholders during evaluation projects.

For example, my experience and the experiences of my associate while evaluating a safety program for a school district are not discussed in our article, “How the School Built Environment Exacerbates Bullying and Peer Harassment” (Fram and Dickmann 2012). Our experiences were not as extreme as some we have heard about, but they have similarities to those described by Bechar and Mero-Jaffe (2014), who stated in their article:

Overtly, the program head had agreed to the evaluation and recognized its importance, covertly, his attitude showed inconsistency in his willingness to support the evaluation, which was expressed during the course of the evaluation and in his reaction to the final report; we interpret these as sign of fear of evaluation (p. 369).

We did not interpret our experience as a sign of fear, but as a serious issue involving the cohesiveness of district leaders, namely the superintendent and all of the schools’ principals. Our evidence pointed toward the leadership style of the superintendent and the politics embedded in the school system. We even stated in our article that politics at the district level had a major negative impact on the safety program at the schools. If any fear of evaluation existed among the stakeholders, it was purely about the possible loss of their jobs. Any fear we witnessed was a symptom of the dysfunctional relationships among the stakeholders.

Bechar and Mero-Jaffe refer to Donaldson’s (2007) introduction of a new term, excessive evaluation anxiety (XEA). I am inclined to say that our experiences had little connection, if any, to this phenomenon.

I am proposing two ways to lower the chances of experiencing destructive behavior from stakeholders:

1. Plan the initial stakeholder meeting more thoroughly and incorporate an informative session (whether on another day or the same day) to discuss the perceptions and concerns of all of the stakeholders involved. Meet with individual stakeholders afterwards to further address their concerns in private.

We did have an informative and well-planned meeting with the stakeholders together, and my associate and I both met separately with each stakeholder to address any concerns they had. For the most part, we believe that the individual meetings improved our chances of collecting data effectively with support from the stakeholders. One of the principals wanted to be involved, and we made this happen even though the superintendent did not want any principal involved at any stage. We asked the principal to help us orchestrate the informed consent meeting with the teachers, and I asked the principal to walk with me as I took photographs to offer insight about particular spaces of the school. My associate was able to show the superintendent the benefit of having the principal involved.

2. Before the initial stakeholder meeting, schedule and orchestrate a focus group designed from a social constructivist perspective, or what Ryan, Gandha, Culbertson, and Carlson (2014) called a Type B focus group. This style of focus group reveals tacit knowledge during social participation and is effective at getting at what is hidden during social interactions. With targeted questions intermingled in the conversation, a moderator can tease out underlying politics, beliefs, and facades of social relationships. In addition, targeted and informative comments by the moderator can help better inform the stakeholders about the evaluation process and eliminate confusion and misunderstandings. Ultimately, this additional data adds richness.

I believe that had we completed such a focus group with all of the stakeholders, we would have had a more effective evaluation, which would have benefited all of them. We were not able to collect all of the data needed to complete a thorough evaluation of the safety program in all of the schools; we were only allowed to focus on the elementary schools when collecting data.

I digress:

The Bechar and Mero-Jaffe article does concern me. As an outsider to the experiences they describe, I have to say that while reading the article, I felt that the authors were still too sensitive about those experiences. Their description of, and choice of information to present about, the program head as one of the stakeholders was unsettling to me. I question whether this article should have been published at all.

Regarding the use of the term, excessive evaluation anxiety, I have to add that just because destructive behavior exists does not mean that it always points to some anxiety about evaluation. In our case, the destructive behavior pointed to a dysfunctional relationship among the stakeholders and how such relationships had a negative impact on the safety program.

 

References:

Bechar, S. and Mero-Jaffe, I. 2014. Who is afraid of evaluation? Ethics in evaluation research as a way to cope with excessive evaluation anxiety: Insights from a case study. American Journal of Evaluation, 35(3): 364-376.

Donaldson, S.I. 2007. Program Theory-Driven Evaluation Science: Strategies and Applications. London, England: Routledge.

Ryan, K.E., Gandha, T., Culbertson, K.J. and Carlson, C. 2014. Focus group evidence: Implications for design and analysis. American Journal of Evaluation, 35(3): 328-345.

 

What is “Big Data”?

Recently, I have been talking with folks in various fields of research. One thing that has caught my interest is the different understandings that people have of the term “big data.”

I chatted with a friend of mine recently about his job interview experience. The confusion was terrible. My friend, an economist, was being asked questions about his experience with big data by a manager with an engineering background. The manager’s understanding of big data was similar to the understanding that most IT folks have, whereas my friend’s understanding of the term was similar to the one many economists hold. Needless to say, neither of them understood what the other person was trying to say, and the interview was clouded with confusion.

When I talked with several economists, their understanding was consistent: big data really refers to high-frequency data, or real-time data that is collected continuously. For example, financial data on a specific company’s stock market activity may be collected every second, or household electricity usage may be recorded every 15 minutes.

When I talked with several IT people, their understanding was also consistent: big data means data sets so huge that they are too large to manipulate with standard methods or tools. This does not necessarily refer to high-frequency data.
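
A small, hypothetical Python sketch may make the contrast concrete (the file and column names are invented for illustration): the first half treats “big data” in the economists’ high-frequency sense, the second in the IT sense of a data set too large to hold in memory at once.

```python
import pandas as pd

# Economists' sense: high-frequency data, e.g. per-second stock ticks,
# resampled to one-minute averages for analysis.
ticks = pd.read_csv("ticks.csv", parse_dates=["timestamp"], index_col="timestamp")
per_minute = ticks["price"].resample("1min").mean()
print(per_minute.head())

# IT sense: a data set too large to load at once, so it is streamed
# through in chunks rather than held in memory.
total_rows = 0
for chunk in pd.read_csv("huge_log.csv", chunksize=1_000_000):
    total_rows += len(chunk)
print(total_rows)
```

Neither half is exotic on its own; the confusion comes from the same two words pointing at two very different problems.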

Even Professors Ward and Barker at the University of St Andrews in Scotland found out just how confusing the use of the term has become: http://www.technologyreview.com/view/519851/the-big-data-conundrum-how-to-define-it/. They list these definitions:

1. Gartner. In 2001, a Meta (now Gartner) report noted the increasing size of data, the increasing rate at which it is produced, and the increasing range of formats and representations employed. This report predated the term “big data” but proposed a three-fold definition encompassing the “three Vs”: Volume, Velocity and Variety. This idea has since become popular and sometimes includes a fourth V, Veracity, to cover questions of trust and uncertainty.

2. Oracle. Big data is the derivation of value from traditional relational database-driven business decision making, augmented with new sources of unstructured data.

3. Intel. Big data opportunities emerge in organizations generating a median of 300 terabytes of data a week. The most common forms of data analyzed in this way are business transactions stored in relational databases, followed by documents, e-mail, sensor data, blogs, and social media.

4. Microsoft. “Big data is the term increasingly used to describe the process of applying serious computing power—the latest in machine learning and artificial intelligence—to seriously massive and often highly complex sets of information.”

5. The Method for an Integrated Knowledge Environment open-source project. The MIKE project argues that big data is not a function of the size of a data set but its complexity. Consequently, it is the high degree of permutations and interactions within a data set that defines big data.

6. The National Institute of Standards and Technology. NIST argues that big data is data which “exceed(s) the capacity or capability of current or conventional methods and systems.” In other words, the notion of “big” is relative to the current standard of computation.

In the field of education, the use of the term “big data” is conflated with understandings of learning analytics (LA), which involves collecting large amounts of learner-produced data to predict future learning, and educational data mining (EDM), which focuses on developing and improving methods for extracting meaning from large data sets about learning in educational settings. EDM originated in tutoring paradigms and systems, whereas LA originated in learning management systems.

When I talked with two sociologists, their understanding was more in line with the IT view: big data concerns huge data sets that require substantial computing power to extract any meaning from the data. One sociologist referred to her use of a huge data set of tweets gathered from Twitter within a short time span on a single day.

Several points are clear:

1. There is too much confusion regarding the definition of the term “big data.”

2. Each field of study will have to come up with its own term, because the use of “big data” causes too much confusion across the board.

3. Each field needs to work on resolving the confusion that exists within its own ranks regarding the proper terms to use.

4. Finally, academic journal boards and editors must require authors to use the most appropriate term in their publications. For example, use “high-frequency data” instead of “big data.”

Research versus Evaluation

In response to a recent blog post by BetterEvaluation: “Week 19: Ways of framing the difference between research and evaluation.”

I appreciate the precise details offered in the post. Such information offers clarity that is sometimes assumed. Where I feel that more clarity is needed is in the use of the term “category” to differentiate and identify the terms “research” and “evaluation.” The differences between research and evaluation point toward research and evaluation being different methodologies. To reiterate, I continue to follow the definition of the term methodology used by Strauss and Corbin (1998): “a way of thinking about and studying social reality” (p. 3). Some people will use the term paradigm instead of methodology.

The BetterEvaluation blogger states that research and evaluation can be understood as a dichotomy. I disagree that research and evaluation are dichotomous; rather, they can be viewed as methodologies along a continuum, where the differences determine how far apart the similarities sit. In addition, the blogger’s effort to show various connections between the methodologies, and the ways they support each other, seems to contradict the use of the term “dichotomy.” As independent methodologies, they can overlap during the data collection and data analysis processes in the types of methods being used, as well as during other components of a study. Such overlapping always occurs because of the type of research question being asked. Research questions focusing on investigating programs can include an investigation of social processes. Research questions focusing on investigating social processes can include an investigation of the connection of a social process to the impact of a policy or program. These are just two examples of many.

To conclude, I believe that any discussion of the differences between research and evaluation as methodologies should begin by emphasizing the research/study questions as the ultimate element, the point of origin, for deciding which methodology a study calls for and how these two methodologies interact within that study.