Wow, 51 authors referenced my 2013 article!

Fram's Google Citations page for the 2013 article


Hello Folks! It has been a while. I hope that everyone is having an enjoyable summer. I did a lot of traveling this summer.

I just noticed that my 2013 article, “The Constant Comparative Analysis Method Outside of Grounded Theory,” has been referenced 51 times so far. Though I appreciate the support and send a big thank you to those who have referenced my work, I have to add something.

While reviewing the articles and dissertations that cited my article, I saw a pattern that concerned me. At least 9 of those citing my article misquoted me. I won’t pick at particular articles, but you can go through the list to find the examples. A few state that I used the CCA method as a “technique.” While I clearly state that, early on, CCA was called a technique by a few researchers, I never stated that I was using a technique. I used a method.

Another small group of authors make statements about their use of Grounded Theory as a guiding methodology and proceed to include me in a list of other well-known authors who use GT. If you read the title of my article, it clearly states that I am using the CCA method OUTSIDE of GT. Geez! One dissertation makes this mistake. I am seriously concerned about the ability of those who were on the dissertation committee to guide novice researchers!

Overall, many of the publications citing my 2013 article correctly highlight the use of my innovative model for the CCA method to help them collect and analyze their data. I appreciate you “getting it.”  Still others highlight my article in contrast to the previous articles published on the use of GT. You all are awesome!!!!

I could sit and ponder why some authors incorrectly cited my article. I probably would be disappointed in them. I suspect that I would reference a previous post of mine complaining or calling out a serious problem in academia. But the summer is passing and I would like to enjoy what is left of it.

Please read through the list of citations for my 2013 article. I would appreciate your insight into what I observed.

I wish you well. Until next time….

Critical discourse analysis…what’s the word currently?

One of my areas of expertise is critical discourse analysis (CDA); I credit the amazing professors I had during my master's and doctoral programs. Over the years, I have been able to develop a “gut feeling,” a sense, that something is hidden from me. Everyday interactions have become out of the ordinary for me. Attending group events has been amazingly exciting for me, in a geekish way. It is awesome to be able to “read” between the lines and see practices that stand out to me, but not to others.

The simplest understanding is that critical discourse analysis (CDA) shows us that language is a form of social practice. At the micro level, an individual writes and speaks beliefs and norms that have been imposed upon that individual. At first glance, we cannot recognize that those spoken beliefs are anything more than individual. At the macro level, groups of people (also individuals) come together to write and speak beliefs and norms with such emphasis that these beliefs manifest as taken-for-granted common sense and dominant practices. This is the age-old understanding of the power of a person vs. the power of the people, liberalism vs. socialism, etc.

van Dijk (1993) stated:

Dominance is defined here as the exercise of social power by elites, institutions or groups, that results in social inequality, including political, cultural, class, ethnic, racial and gender inequality. This reproduction process may involve such different modes of discourse power relations as the more or less direct or overt support, enactment, representation, legitimation, denial, mitigation or concealment of dominance, among others. More specifically, critical discourse analysts want to know what structures, strategies or other properties of text, talk, verbal interaction or communicative events play a role in these modes of reproduction (pp. 249-250).

This is still an accurate statement, one that continues to highlight the historical changes and exchanges of power from one dominant group to the next. What the analysis comes down to is identifying a dominant ideology and how it manifests itself through discourse that is seen in text, talk, images, actions, and social practices.

Fairclough, Mulderrig, and Wodak (2011) highlighted how CDA has evolved as a field in the social sciences:

CDA is not a discrete academic discipline with a relatively fixed set of research methods. Instead, we might best see CDA as a problem-oriented interdisciplinary research movement, subsuming a variety of approaches, each with different theoretical models, research methods and agenda. What unites them is a shared interest in the semiotic dimensions of power, injustice, abuse, and political-economic or cultural change in society (p. 357).

CDA works to break down and dismantle the social (e.g. gender, sexual orientation, race), political (e.g. conservatism, liberalism, socialism), cultural (e.g. American, European), economic (e.g. globalization, consumer cultures), epistemological (e.g. schooling, culture of assessment) and other dominant ideologies in order to identify, see, and recognize the discourses (e.g. text, talk, social actions, social practices) that support them.

Fairclough, Mulderrig, and Wodak (2011) break down a typical process for using CDA. They stated:

Unlike some forms of discourse-based research, CDA does not begin with a fixed theoretical and methodological stance. Instead, the CDA research process begins with a research topic; for example, racism, democratic participation, Middle East politics, globalization, workplace literacy, consumer cultures, and so forth. Methodology is the process during which, informed through theory, this topic is further refined so as to construct the objects of research (pinpointing specific foci and research questions). The choice of appropriate methods (data collection and mode of analysis) depends on what one is investigating. Thus, for example, it is likely that a different set of analytical and theoretical tools will be required to investigate neoliberal ideology in welfare policy from those needed to explore workplace sectarianism in Northern Ireland.  (p. 358-359).

What guides an analyst is an interest and/or stake in a particular topic and the presence of some form of dominance and/or power shifts occurring. For example, as a female, I have a stake in how conservative Republicans in the United States dictate what power I have over my own body. I could investigate the origins of this political ideology and its connection to a specific gender ideology.

From what I have read, CDA is still going strong as a field. What concerns me is the frequency of its use in times when dominant ideologies are powerful enough to stay “hidden” and have the ability to block any attempt to dismantle them (e.g. court cases where powerful organizations are fighting the use of freedom of speech; the omission of information in politics that is never realized). Such power is evil and of major concern for the people who have become oblivious to such efforts. I ask you to stop, look, and listen this week. Begin to learn how to “read” and “see” what has been hidden from you because of who you are.


Fairclough, N., Mulderrig, J., and Wodak, R. (2011). “Critical discourse analysis.” In T. A. van Dijk (ed.) Discourse Studies: A multidisciplinary introduction. London: SAGE, 357+.

van Dijk, T.A. (1993). “Principles of critical discourse analysis.” Discourse & Society, 4(2): 249-283. London: SAGE.

I am back!

Well, hello!

I hope your holidays were safe and fun. My employment involves developing training programs, including curriculum development, learning outcomes assessments, and course and program evaluations. This year I will be posting less frequently. Please stay tuned for some posts on the methods to use when analyzing curriculum, developing more effective course evaluations, and other relevant topics. My next official post will be in February. Cheers!

Taking some time…

Hello to all out there doing research and evaluations!

I hope you are feeling well. I thank those who have repeatedly returned to my blog to gain some insight or information. I hope that you keep coming back for more. Right on schedule! I notice an increase in viewers about this time, towards the end of the semester. I wish all of you students good luck on your theses and dissertations! Remember, start with the research question FIRST, then decide what methodology and methods to use based on the question.

It is the day before Halloween and instead of doing some serious examination or completing some complex evaluation, I have decided to take some time to be less serious. I will post my next blog posting at the beginning of January 2015. Besides working, I plan to spend more quality time with my family, enjoy Halloween with my child, take the time to vote next week, enjoy Thanksgiving with my family, enjoy the Winter Solstice and enjoy all the other holidays in the months of November and December.  Being from Generation X, we have a tendency to work hard and play harder. I have to add one of my favorite movie quotes from the character Ferris Bueller, “Life moves pretty fast. If you don’t stop and look around once in a while, you could miss it” (John Hughes, 1986). It is that time again for me to stop and look or observe. I wish all of you a safe and happy holiday season. Please remember to donate food, time or money to help those humans and animals in need. Happy Holidays! Cheers!

What is up with the omission of information in Vogt, Vogt, Gardner, and Haeffele 2014 chapter?

Hello All. I hope you are well today.

I just received a copy of Vogt et al.’s (2014) Selecting the Right Analyses for Your Data: Quantitative, Qualitative and Mixed Methods. I was excited to begin reading this book because I thoroughly enjoyed their last book (When to Use What Research Design) published in 2013 on the use of data collection methods and sampling. This new book is supposed to complete the research process discussed in the first book.

Unfortunately, I am still reading this second book, but felt compelled to make a comment about the omission of information in the subsection of the book, called “Technologies for Recording Observational Data,” on page 119. This section literally throws in a few sentences on visual sociology. The authors define the term and include examples for each section of the definition. The next paragraph stated:

“Visual social research seems underutilized. Rigor in the coding and analysis of visual data does not appear to us to have progressed much beyond nor often attained the levels demonstrated in the classic by Gregory Bateson and Margaret Mead, Balinese Character: A Photographic Analysis, published more than 70 years ago. Mead and Bateson did not simply illustrate, they integrated photographic data into their analyses. [Next paragraph] With the easy availability of photographic technology, for example on cell phones, one might expect visual sociology and anthropology to have become more widespread. Perhaps visual recordings have remained underutilized because of regulations regarding research ethics, particularly the anonymity of research participants…” (pg. 119-120).

WOW. To say the least. The authors refer to the journal Visual Studies, the International Visual Sociology Association, Pierre Bourdieu's (1965) work, and Douglas Harper's (1988) article in the journal as the official references for anything regarding visual sociology. Having published in the journal, having chaired and presented at the Association's conferences repeatedly, and having met Dr. Harper and other hard-working sociologists at one of those conferences, I would have to say that Vogt, Vogt, Gardner, and Haeffele need to seriously apologize for their inaccurate comments and update the information in their book. They singlehandedly disrespected an entire discipline and highlighted just how ill-informed they are even after appearing to do a proper literature review for this section of their book.

At the very least, they could have referenced Ball and Smith's 1992 book, Analyzing Visual Data, to see that coding and analysis efforts in the 1990s were trying to advance the discipline. What about van Leeuwen and Jewitt's (2001) Handbook of Visual Analysis? This handbook offers details about “cooking” the data and preparing the visual data for analysis. Even my little article with Dr. Margolis, published in Visual Studies (Fram and Margolis 2011), offers an example of how to code and analyze visual data and how to apply our new coding method, archivization.

Their comment gives off the sense that an entire discipline has done nothing to advance itself in more than 50 years. At the very least, the section is written in such a way that it lends itself to conveying serious misunderstandings. I can agree that any and all disciplines have their moments in history when they do not advance or they are stuck and not progressing, but to totally discount a group of people who have worked hard to advance visual sociology after 1942 and up to Pierre Bourdieu (1965), then after 1965 and up to 1988 with Harper's article, then after 1988…Come on.

What the heck…

The authors need to offer some serious clarification and an apology.

I will continue to read this latest book by the authors because now I am concerned that more information has been omitted.

I am speechless….


Ball, M.S. and Smith, G.W.H. 1992. Analyzing Visual Data. Newbury Park, CA: Sage Publications.

Bourdieu, P. 1965. Un art moyen: Essai sur les usages sociaux de photographie. Paris: Ed. du Minuit.

Harper, D. 1988. Visual sociology: Expanding sociological vision. American Sociologist, 21, 54-70.

van Leeuwen, T. and Jewitt, C. 2001. Handbook of Visual Analysis. London: Sage Publications, Ltd.

Vogt, W.P., Vogt, E.R., Gardner, D.C. and Haeffele, L.M. 2014. Selecting the Right Analyses for Your Data: Quantitative, Qualitative, and Mixed Methods. New York: Guilford Press.

Using quantitative analyses on qualitative data gathered: Is there a recipe for success among the methods used?

I have always been a supporter of the mixed methods research methodology. To reiterate from my previous posts, it is all about answering the research questions. The research questions decide what methodology and what methods to use to answer them.

In the 2014 article, “Quantitative Analysis of Qualitative Information From Interviews: A Systematic Literature Review,” by Fakis, Hilliam, Stoneley, and Townend, a point is treated as inconsequential. Overall, the authors present a strong argument for using quantitative analysis methods on qualitative information to generate new hypotheses and to test theories. This post addresses the inconsequential point that “the quantification of qualitative information is not related to specific qualitative technique and is not an interest only for specific type of qualitative researchers” (p. 156).

I have to disagree and state that this topic should be further investigated. Based on my experience with and knowledge of methods use, I recognize that a method's essential process of reducing (versus organizing) data is a key factor in the successful use of quantitative analysis methods to extract macro- and meso-level patterns from the data. In my 2013 article (posted on my blog somewhere!), I clearly show a complex reduction process for the constant comparative analysis method. Such a process could work as an advantage for using a particular quantitative analysis method. I am not an expert in quantitative analysis, but from my experience, if you have thoroughly and effectively reduced the qualitative data during a qualitative analysis, there is less variation in the independent variables when you begin to use a quantitative analysis method; automatically, the researcher gets a stronger relationship between the independent and dependent variables. This highlights that the reducing and reorganizing stages of specific qualitative analysis methods make them more suitable for use with specific quantitative analysis methods. In general, the mixed methods methodology is grounded in this logic, but the authors seem to brush off this connection recognized in their literature review.

The authors stated that the content analysis method was commonly used and showed more valid and reliable results when accounting for an acceptable sample size. Their misunderstanding occurs in their passive inclusion of “grounded theory for analyzing” instead of taking into account the strengths of the constant comparative analysis method outside of grounded theory for reducing data. Similar to an algorithm, a researcher works the data by reducing and/or reorganizing following a finite list of well-defined steps. It is logical to assume that specific qualitative analysis methods with more precise reduction processes are better suited for the use of specific quantitative analysis methods.
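To make the algorithm analogy concrete, a reduction pass can be sketched in a few lines of code. Everything here (the excerpts, codes, and category labels) is invented for illustration; this is a generic sketch of reducing data in well-defined steps, not the model from my 2013 article:

```python
# A minimal sketch of a reduction pipeline: raw excerpts are assigned open
# codes, and codes are grouped into categories. All names and mappings here
# are hypothetical.

excerpts_to_codes = {
    "Students avoid the far stairwell.": "avoidance",
    "Teachers rarely monitor the stairwell.": "low supervision",
    "Incidents cluster near unmonitored exits.": "low supervision",
}

codes_to_categories = {
    "avoidance": "student behavior",
    "low supervision": "adult presence",
}

def reduce_step(assignments):
    """Group items under the label each was assigned (one reduction pass)."""
    grouped = {}
    for item, label in assignments.items():
        grouped.setdefault(label, []).append(item)
    return grouped

codes = reduce_step(excerpts_to_codes)         # 3 excerpts reduced to 2 codes
categories = reduce_step(codes_to_categories)  # 2 codes grouped by category

print(len(codes))  # 2
```

Each pass leaves fewer, better-defined units than the one before it, which is exactly the property that would make a later quantitative step more tractable.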

As I have stated in a previous post, many published journal articles are not offering enough details about the data-coding stage of their research project. This issue easily can contribute to misguided understandings about the preciseness of reduction processes for specific qualitative analysis methods like the content analysis method and the thematic analysis method. Content Analysis has its origins in quantitative research; therefore, it is hardly a jump to be able to use quantitative analyses on qualitative information involving the use of the content analysis method. Traditionally, thematic analysis has been a qualitative method. Recent qualitative analysis software made the jump easier.
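As a rough illustration of that jump, coded qualitative segments can be tallied into a frequency table that quantitative methods can then work on. The groups and codes below are invented; this is a sketch of the quantification step only, not the procedure used in any of the articles reviewed:

```python
from collections import Counter

# Hypothetical coded interview segments: each segment was assigned one code
# during a content-analysis pass (groups, codes, and data are invented).
coded_segments = {
    "teacher_interviews": ["safety", "supervision", "safety", "bullying", "supervision"],
    "student_interviews": ["bullying", "bullying", "safety", "isolation", "bullying"],
}

def quantify(codes_by_group):
    """Turn qualitative code assignments into a code-frequency matrix."""
    all_codes = sorted({c for codes in codes_by_group.values() for c in codes})
    matrix = {}
    for group, codes in codes_by_group.items():
        counts = Counter(codes)
        matrix[group] = {code: counts.get(code, 0) for code in all_codes}
    return matrix

freq = quantify(coded_segments)
print(freq["teacher_interviews"]["safety"])    # 2
print(freq["student_interviews"]["bullying"])  # 3
```

Once the data sit in a frequency matrix like this, standard statistical tests (a chi-square test of independence, for example) become straightforward to apply, which is why content analysis makes the jump so easily.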

The authors stated:

…the statistical analysis of qualitative information was observed more in data derived from content analysis, which is used for extracting objective content from texts for identifying themes and patterns (Hsieh & Shannon, 2005). The type of information extracted from the content analysis could be measured and transformed to quantitative data more regularly than using other methods of qualitative analysis. In six studies content analysis was initially performed for analyzing qualitative data. However, six of the articles in the review used thematic analysis or grounded theory for analyzing the qualitative information. The variety of qualitative methods used before the statistical methods are applied could indicate that the quantification of qualitative information is not related to specific qualitative technique and is not an interest only for specific type of qualitative researchers (p. 156).

The authors bring up a valuable point, but treat it with less importance. Future research on methods use in mixed methods research should include an investigation of the data-coding stage regarding the initial analysis of qualitative data using qualitative analysis methods and the complementary use of quantitative analysis methods to glean new hypotheses or to test a theory. This investigation should focus on the reduction and reorganization processes that occur during analysis and highlight which qualitative analysis methods are more suitable for a follow-up use of particular quantitative analysis methods. The authors’ suggestion for future research is for effort towards developing an “advanced statistical modeling method that will be able to explore the complex relationships arising from the qualitative information” (p. 158). They need to take one step back and look at the connection between the processes of reducing the data between the qualitative and quantitative methods first.

I believe that the authors missed out on an opportunity to investigate a new hypothesis, missed out on setting the stage to influence others to advance the methodology, and missed out on giving mixed methods designs more credit than they did in their article.


Fakis, A., Hilliam, R., Stoneley, H., & Townend, M. 2014. Quantitative Analysis of Qualitative Information From Interviews: A Systematic Literature Review. Journal of Mixed Methods Research, 8(2): 139-161.

Hsieh, H. F., & Shannon, S. E. 2005. Three approaches to qualitative content analysis. Qualitative Health Research, 15, 1277-1288.

Destructive Behavior in Evaluations

This posting discusses what some evaluators have experienced as destructive behaviors by stakeholders during evaluation projects.

For example, the experiences my associate and I had while evaluating a safety program for a school district are not discussed in our article, “How the School Built Environment Exacerbates Bullying and Peer Harassment” (Fram and Dickmann 2012). Our experiences were not as extreme as some we have heard about, but they have similarities to those described by Bechar and Mero-Jaffe (2014), who stated in their article:

Overtly, the program head had agreed to the evaluation and recognized its importance, covertly, his attitude showed inconsistency in his willingness to support the evaluation, which was expressed during the course of the evaluation and in his reaction to the final report; we interpret these as sign of fear of evaluation (p. 369).

We did not interpret our experience as a sign of fear, but as a serious issue involving the cohesiveness of district leaders, namely the superintendent and all of the principals of the schools. Our evidence pointed towards the leadership style of the superintendent and the politics embedded in the school system. We even stated in our article that politics at the district level had a major negative impact on the safety program at the schools. If any fear of evaluation from the stakeholders existed, it was purely concerning the possible loss of their jobs. Any fear we witnessed was a symptom of the dysfunctional relationships among the stakeholders.

Bechar & Mero-Jaffe refer to Donaldson’s (2007) introduction of a new term, excessive evaluation anxiety (XEA). I am inclined to say that our experiences had little connection if any to this phenomenon.

I am proposing two ways to lower the possibilities of experiencing destructive behavior from stakeholders.

1. More thoroughly plan the initial stakeholder meeting and incorporate an informative session (whether on another day or same day) to discuss the perceptions of and concerns of all of the stakeholders involved. Meet with individual stakeholders afterwards to further address their concerns in private.

We did have an informative and well-planned meeting with the stakeholders together and my associate and I both met separately with each stakeholder to address any concerns that they had. For the most part, we believe that the individual meetings improved our chances of collecting data effectively with support from the stakeholders. One of the principals wanted to be involved and we made this happen even though the superintendent did not want any principal involved at any stage. We asked the principal to help us orchestrate the informed consent meeting involving the teachers and I asked the principal to walk with me as I took photographs to offer me insight about particular spaces of the school. My associate was able to show the superintendent the benefit of having the principal involved.

2. Before the initial stakeholder meeting, schedule and orchestrate a focus group designed from a social constructivist perspective, or what Ryan, Gandha, Culbertson, and Carlson (2014) called a Type B focus group. This style of focus group reveals tacit knowledge during social participation and is effective at getting at what is hidden during social interactions. With targeted questions intermingled in conversation, a moderator can tease out underlying politics, beliefs, and facades of social relationships. In addition, targeted and informative comments by the moderator can help to better inform the stakeholders about the evaluation process and eliminate confusion and misunderstandings. Ultimately, this additional data adds richness.

I believe that had we completed such a focus group with all of the stakeholders, we would have had a more effective evaluation, which would have benefited all of the stakeholders. We were not able to collect all of the data needed to complete a thorough evaluation of the safety program in all of the schools; we were only allowed to focus on the elementary schools when collecting data.

I digress:

The Bechar & Mero-Jaffe article does concern me. As an outsider regarding the experiences, I have to say that while reading the article, I felt that the authors were still too sensitive about their experiences. Their description and choice of information to present regarding the program head as one of the stakeholders was unsettling to me. I question whether or not this article should have been published at all.

Regarding the use of the term, excessive evaluation anxiety, I have to add that just because destructive behavior exists does not mean that it always points to some anxiety about evaluation. In our case, the destructive behavior pointed to a dysfunctional relationship among the stakeholders and how such relationships had a negative impact on the safety program.



Bechar, Shlomit & Mero-Jaffe, Irit. (2014) Who is afraid of evaluation? Ethics in evaluation research as a way to cope with excessive evaluation anxiety: Insights from a case study. American Journal of Evaluation, 35(3): 364-376.

Donaldson, S. I. (2007). Program theory-driven evaluation science: strategies and applications. London, England: Routledge.

Ryan, K. E., Gandha, T., Culbertson, K. J. and Carlson, C. (2014). Focus group evidence: Implications for design and analysis. American Journal of Evaluation, 35(3): 328-345.