Friday, March 25, 2011

Say phenomenological three times... fast!

Hypothetically speaking, of course, if I were to decide to pursue a qualitative study of teacher organizational citizenship behaviors and pay-for-performance incentives, I could do either a phenomenological or an ethnographic study. The chosen approach would depend on whether I focused on just the identification and existence of extra-role behaviors and a shared cooperative culture of teachers (ethnography) or on their experiences as they relate and react to the implementation of an incentive pay system (phenomenological study). Or perhaps I could accomplish both!

In the case of an ethnography, I would start with a foreshadowed problem generated from the active debate within existing research and theory about the existence and definition of teacher organizational citizenship behaviors. I might choose to focus my study on high school teachers and then enter into a relationship with a particular high school in a particular district for the express purpose of sampling, observing, obtaining and analyzing data about the behaviors outside of those required by formal teacher job descriptions. I would seek to identify a pattern from these observations and data about why, how, when and for whom teachers express OCB.

Backing up to review individual aspects of an ethnography: because I cannot possibly study the extra-role behaviors of a comprehensive sample of high school teachers across the U.S., or even all teachers in a particular district, I would have to engage with a purposeful sample of teachers in one or two particular high schools. To protect validity and support any efforts at generalizability, I would need to identify all of the subject/selection threats inherent in the demographics of the district and school with which I chose to engage.

To obtain data, I would work as a moderate participant to record comprehensive observations, both structured and unstructured. I would also arrange to observe applicable committee and team meetings as well as ask questions about behaviors that I do observe. Interviews would also be very important to my study so that I could engage individual teachers in conversation about their behaviors and their own observations of others’ behavior. I foresee that document reviews would be valuable in that I could review the job descriptions and performance evaluations of the teachers to ensure that I could accurately differentiate what was in-role versus extra-role behavior and expectations.

A separate, additional phenomenological study might also be intriguing in that I could then build a further understanding of teacher OCB as it is influenced by the implementation or existence of incentive pay schemes. It would be very cool to query teachers through in-depth interviews or focus groups about their observations, reactions, and feelings (drama!) about pay-for-performance. If I were then able to interpret the data with any significant measure of reliability, I could hope to build new understanding about the use of incentive pay for teachers. Insight into how teachers experience pay-for-performance vis-à-vis the behaviors that they report being incentivized or discouraged would be very valuable feedback to have when considering the design of future teacher compensation systems.

Outside of this particular research problem relating to teacher behavior, I think a phenomenological study of pay-for-performance in any corporate environment could be extremely meaningful. There is a plethora of reports and studies claiming that pay-for-performance is ineffective, broken, and not worthwhile, but I don’t recall seeing any that try to derive meaning about pay-for-performance systems based on a qualitative investigation of participant experiences with it.

Friday, March 18, 2011

A sample of my literature review

Study #1: Extra-role behaviors: job satisfaction, efficacy and multiple dimensions

The first study was by researchers in Israel who looked at the components of extra-role behavior (ERB) and the correlational relationship between ERB and three variables: job satisfaction, self-efficacy, and collective efficacy. The researchers defined extra-role behavior for the purposes of their study as,
"those behaviors that go beyond specified role requirements, and
are directed towards the individual, the group, or the organization as a unit,
in order to promote organizational goals." (Somech & Drach-Zahavy, 2000, p. 650)

The sampling frame was elementary school teachers in northern Israel. Questionnaires were distributed to 375 teachers from 13 different elementary schools, of which 251 were returned to yield a response rate of 67%.

To develop a measure of extra-role behavior, they conducted semi-structured interviews of five principals and 25 teachers, who were asked to list behaviors, outside of formal role requirements, that teachers exhibit to benefit the student, their team, or the school unit. The 60 items produced from this process were validated, and the list reduced, by teachers who were involved in a management-training program. Factor analysis then organized the remaining items into three subscales of eight items each, measuring ERB toward the student, the team, and the organization.

To measure the variables of job satisfaction, self-efficacy, and collective efficacy, three additional scales based on prior research were adapted for use with teachers. All three of these measures had respondents indicating their agreement or disagreement with the test items using a five- or seven-point Likert scale.

To test the first hypothesis regarding the components of ERB, the researchers performed a factor analysis, which yielded three separate factors of ERB. Factor 1 included behaviors geared towards helping other teachers and so was recognized as ERB towards team members. Factor 2 behaviors were generally identified as those aimed at improving conditions for the school as a unit and so were termed ERB towards the organization. Factor 3 consisted of behaviors intended to improve student performance or the quality of the teaching; it was termed ERB towards the student.
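As a rough illustration of what that factor analysis is doing, here is a minimal Python sketch on entirely synthetic survey data. The sample size (251 teachers) and item count (24 items) are borrowed from the study, but the responses, loadings, and noise level are all made up for the example; a large eigenvalue in the item correlation matrix marks a latent factor.

```python
import numpy as np

rng = np.random.default_rng(0)
n_teachers, n_items = 251, 24

# Simulate responses driven by three hypothetical latent ERB dimensions plus noise
latent = rng.normal(size=(n_teachers, 3))
loadings = rng.normal(size=(3, n_items))
responses = latent @ loadings + rng.normal(scale=0.5, size=(n_teachers, n_items))

# Eigen-decompose the item correlation matrix; large eigenvalues mark factors
corr = np.corrcoef(responses, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]
print(eigvals[:4])  # the first three eigenvalues dominate, suggesting three factors
```

In a real analysis one would also rotate and inspect the loadings to label each factor (team, organization, student), which is the step that gives the factors their substantive meaning.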

Correlation analysis was used to test the other hypotheses regarding the relationships between job satisfaction, self-efficacy, and collective efficacy and each of the three levels of ERB (team, organization, student). A positive and significant relationship was found between teacher satisfaction and ERB at all three levels. Self-efficacy was also found to be correlated with ERB towards the team and the organization, but no significant relationship was found between self-efficacy and ERB towards the student. A positive and significant relationship was found between collective efficacy and ERB towards the team, but none between collective efficacy and ERB towards the organization or the student.

The results of the factor analysis support the discussion of ERB as multidimensional. The extent to which the three different variables were or were not correlated with the various levels of ERB further supports this. Altogether, the research suggests that formal efforts to increase levels of job satisfaction, self-efficacy, or collective efficacy will have varying impacts on levels and types of extra-role behaviors.

Study #2: Organizational Citizenship Behavior and Pay-for-Performance plans

This study addressed whether there was an unintended negative interaction between pay-for-performance plans designed to reward and encourage certain in-role behaviors and those behaviors that are not formally rewarded or controlled through job descriptions but which yield value to the organization. The researchers also sought to determine whether the degree of interest alignment between the employer and the employee moderated the impact of pay-for-performance plans on those organizational citizenship, or extra-role, behaviors.

The sampling frame was employees in certain benchmark jobs and their supervisors, from eight different utility companies across the United States. The supervisors of the benchmark jobs in each company were chosen first and then asked to randomly select up to 10 subordinates who were then surveyed about attitudes towards the job and the work organization (n = 660). The sample was further culled by matching to supervisor responses and confirmation of their participation in a pay-for-performance plan. The subsample remaining consisted of 146 employees.

The survey issued to participants included items from several pre-existing measures of organizational citizenship behaviors, value alignment, and procedural justice, as well as items developed purposely to measure the employee’s perception of the link between performance and pay.

A regression analysis allowed the researchers to determine the extent to which the effect of the pay-for-performance link on OCB was influenced by the employee’s value commitment. They found that when employees were not aligned with the organization, their level of OCB was lower the stronger they perceived the link between their performance and their pay to be. Conversely, when employees were committed to the organization, their OCB increased as the pay-performance link grew stronger. Procedural justice was also found to be a significant predictor of OCB.
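That moderation pattern can be sketched as a regression with an interaction term. The sketch below uses synthetic data (only the subsample size, n = 146, comes from the study; the coefficients and noise level are invented to reproduce the reported sign-flip): OCB is regressed on the perceived pay-performance link, value commitment, and their product, and a positive interaction coefficient indicates that commitment moderates the link's effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 146
link = rng.normal(size=n)      # perceived pay-performance link (standardized)
commit = rng.normal(size=n)    # value commitment to the organization

# Simulate the reported pattern: the link's effect on OCB flips sign with commitment
ocb = 0.5 * commit + 0.6 * link * commit + rng.normal(scale=0.3, size=n)

# Ordinary least squares with an interaction term: intercept, link, commit, link*commit
X = np.column_stack([np.ones(n), link, commit, link * commit])
beta, *_ = np.linalg.lstsq(X, ocb, rcond=None)
print(beta)  # the interaction coefficient recovers roughly the simulated 0.6
```

For a low-commitment employee (commit < 0) the fitted slope on `link` is negative, and for a high-commitment employee it is positive, which is exactly the moderation the study describes.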

This research is valuable in indicating that pay-for-performance plans will not necessarily negatively affect the level of organizational citizenship behavior displayed by the employee. In fact, if the employee and organization are “mutually invested”, employees may actually increase their OCB in response to the organization’s commitment to reward performance with pay.

On the topic of validity, both studies described limitations of their research and results. First, the measures utilized in both were either specially designed for the purpose or adapted from different pre-existing measures, and neither study significantly discussed whether the survey items themselves were validated. Second, although relationships between variables and OCB were evidenced, there was no testing of the directionality of the effects. Finally, both studies used only self-reports of behavior and attitude, meaning the results indicate relationships but only from the perspective of one source type. Therefore, the results of both studies are not generalizable and are thus much less useful in pointing a clear and convincing way forward on specific policy or program changes that could be implemented to increase or protect OCB.

References:

Somech, A., & Drach-Zahavy, A. (2000). Understanding extra-role behavior in schools: The relationships between job satisfaction, sense of efficacy, and teachers’ extra-role behavior. Teaching and Teacher Education, 16(5-6), 649-659. DOI: 10.1016/S0742-051X(00)00012-3

Deckop, J. R., Mangel, R., & Cirka, C. C. (1999). Research notes. Getting more than you pay for: Organizational citizenship behavior and pay-for-performance plans. Academy of Management Journal, 42(4), 420-428.

Sunday, March 6, 2011

Managing validity

The concept of validity has been a lecture or exam topic in at least one class during each of my five semesters in graduate school. I have been asked to understand the validity of correlations between the Big Five personality traits and job performance, explain the validity of different tools used in recruiting and staffing, and now, finally, the validity of my own research. Over this time period, I had become familiar with, and even memorized, various definitions of validity, and yet I was still quite impressed with the explanation offered in our textbook. Dr. McMillan suddenly provided a more relevant understanding for me, perhaps only partly because I knew that it was a concept worth 6% of our research proposal grade. On page 144, Dr. McMillan advises that validity is a “judgment of the appropriateness of a measure for the specific inferences or decisions that result from the scores generated by the measure.” For some reason, this explanation was much more about the “big picture of validity” than the other, technical definitions I had learned previously. For me now, validity is as much about what I do or decide as a result of my findings as it is about how accurate the tools are that I use or the relationships that are observed. So as I develop this research aimed at observing the impact of merit pay incentives on teachers’ organizational citizenship behaviors, I want to carefully protect the instrument, the subjects, the treatment, and the environment so that I am able to present inferences and suggest actions that are altogether appropriate and sound.

Although I am still searching for the best measure of organizational citizenship behaviors, I can foresee and will manage the threats to validity that the instrument itself can present. For instance, I know already that I cannot use a tool “off the shelf” that measures organizational citizenship behaviors (OCB) generally; instead, I require a measure that has been adjusted for the school setting. Luckily, there is a measurement model that has been developed for use with teachers, but it includes additional scales that may not be relevant to the OCB discussion. So if I do make changes to this tool, the validity of the edited measure can only be confirmed if I reanalyze its “fit” to the theoretical model and to any other potentially valid measures.

In this study, I propose to measure and compare the levels of OCB exhibited by teachers who receive performance pay incentives to those who receive just a standard scheduled salary. In that the incentive compensation provided to the teachers is the “treatment,” I do not anticipate that the control group of teachers who receive only straight salary will inadvertently be “treated.” I do, however, need to stay extremely wary that both the control and the experimental group may be receiving a “treatment” via their environments, i.e. their departments, buildings, districts, and communities, that impacts their OCB levels. In this study, then, the dependent variable is more at risk of being compromised by extraneous events, timelines, and settings than the independent variable.
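The core comparison proposed here boils down to a two-group contrast on mean OCB scores. A minimal sketch, on entirely invented data (group sizes, scale means, and spread are all assumptions, not figures from any study), computes a two-sample t statistic by hand:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical OCB scale scores (e.g., on a 1-5 Likert-style scale)
incentive = rng.normal(loc=4.0, scale=0.6, size=80)  # performance-pay teachers
salary = rng.normal(loc=3.4, scale=0.6, size=80)     # straight-salary teachers

# Welch-style two-sample t statistic: mean difference over its standard error
diff = incentive.mean() - salary.mean()
se = np.sqrt(incentive.var(ddof=1) / 80 + salary.var(ddof=1) / 80)
t_stat = diff / se
print(round(t_stat, 2))
```

Of course, as the paragraph above notes, a significant t statistic alone would not rule out the environmental "treatments" (department, building, district, community) confounding the comparison; that is a design problem, not an analysis problem.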

I sense that the greatest risk to validity comes in the course of selecting and maintaining the study’s subjects themselves. As we all know, people are strange and generally act weirdly, especially when it comes to professional motivations and behaviors. That said, as much as I attempt to control for natural and cultural characteristics that affect OCB, I will undoubtedly fail to identify groups of subjects that are absolutely equal in their predispositions. Working in my favor, though, previous studies of OCB have shown that individual characteristics like personality have limited associations with OCB, and so it has been suggested that OCB is a result of cognitive and rational judgments and not emotional ones. If it were the reverse, I would need to control even more carefully for subject affectivity.

The effect of time and the announcement of the study itself on my subjects is another obvious threat to the validity of my study. Since it is not practical to announce the study and collect all the necessary data at one time, I can presume that my presence and the notice of the study will operate as an intervention in and of itself. The observation of OCB may already be compromised by changes in teachers’ actions between the time the study is announced and when data are collected. As I finalize and summarize my literature review, I intend to keep an eye out for discussions about how this specific threat to validity can be minimized. I imagine it is a threat not only for my study but for most others out there as well.

Even as I manage these threats to validity through careful design of the measure, documentation of the environment, and the “proper care and feeding” of my subjects, data analysis may still yield results that prevent any meaningful inferences or recommendations from being drawn. Although that would be very unfortunate, as Dr. McMillan would have us understand the concept of validity, recognition that a decision or inference should not be drawn from the study may still be a valid conclusion.