Tuesday, April 19, 2011

Counting feelings ... a Pirate Honor Code



A few points from the last class session really struck me so I want to memorialize them in this blog. (Or perhaps I am just procrastinating on my research proposal?)

First, we just happened to wander onto the Count's video, and it worked great as an illustration of the importance of counting in meaning making. But his song about counting feelings is also an apt reminder of the need to strike a balance between quantitative and qualitative methods. It also aligned with my last blog post about being a pirate and ninja, in equal measure. A “feeling” is an important indicator of a participant’s reality. All of the participant’s emotions and reactions should be identified and added up. It is also critical for the researcher to acknowledge and understand her own feelings as she stays attentive to bias and open to new meanings.

Second, this leads to the last few slides we were shown from the Class 9 deck on qualitative analysis. I’ve summarized the points here as a sort of “Pirate Honor Code”:

On my honor, as a Pirate, I will strive to be a good writer and fair observer. I will be respectful, reliable and trustworthy. I will always endeavor to remain open and honest about my biases, and to look for new meaning and patterns in every observation that I make.

The Count said it well… “Feeling all these feelings makes me feel exhausted!” On his advice, I am going to go catch 40 winks.

Sunday, April 10, 2011

Pirate and ninja, in equal measure

Prior to the beginning of this course in January, I certainly did not appreciate the concepts of pirates and ninjas as I do now. I also certainly wouldn’t have been as tickled as I was to have the following conversation with my 10-year-old daughter yesterday:

K: “Mom, I have to find a ninja school to enroll in!”

Me: “But, K., ninjas aren’t seen or heard so you’ll have to be really quiet. Are you sure you’ll enjoy being a ninja?”

K: “I know they’re quiet but they also climb walls. I have to go to ninja school to learn to do both!”

Anticipating the end of this course and indeed the end of my graduate coursework, I would say that I have learned the value of being a pirate and a ninja, in equal measure. I have learned to be a stealthy and quiet ninja at times, and an inquisitive and reflective pirate at others. I have filled my tool belt with both quantitative and qualitative research methods, designs, techniques and procedures. I understand the difference between positing a hypothesis and identifying a foreshadowed problem and how to proceed when researching each one. I know now how to find giants and how to stand on their shoulders when they have already done the hard work for me. I also know how to tell when they really aren’t giants at all.

After two and a half years of exploring this curiosity and that whim as a way to pay some of the household and tuition bills, I have just accepted a full-time job back in my original profession of Human Resource Management. Since I am thinking constantly anyway about this new role, I’d like to use this blog post to reflect on how I can use this course’s lessons to improve my performance at this organization and how I can translate my new-found skills and abilities into success for the business.

Two weeks from now I will begin working as an HR manager for a relatively young organization that contracts with government agencies to provide hardware installation and support. The company has doubled in size in the past year to about 70 employees and anticipates needing to double again within six months. I will be HR person #2, hired to develop a competitive and sustainable employee benefits program as well as to build a full corporate curriculum covering customer service, organizational policies and procedures, leadership development, and performance management.

In the past, I have tended to approach new jobs and business relationships where I am expected to play an expert or advisory role as a ninja. I naturally prefer to support my recommendations with credentials, past work samples, and statistical data. Now I can add to this tendency my new skills in finding quantitative research in peer-reviewed journals and assessing the validity of studies. As this company is in start-up mode, I foresee a specific need to research and clearly communicate study results to management about best practices for emerging companies around benefit plan design (types of coverage, plan designs, and employer/employee cost-sharing percentages), management-to-staff ratios, and employee training, among other things. I’ll also use these studies to encourage data-driven reflection by the leaders prior to developing and implementing our new structures and policies. I hope as well to propose and test hypotheses about what will happen in this particular company with variables like expected enrollment levels, productivity standards, and rates of learning transfer from classroom to desktop. Even though I will not (purposely) undertake experimental research in my new role, I will still be able to quantitatively “study” the impacts of new programs, policies and procedures, even without formal control groups.

But I am most excited about altering my approach to be more like a pirate. I can consider this new job as an opportunity to conduct an ethnography. In the coming months and hopefully, for even longer, I will be able to live within this organization and culture, both as an observer and a participant. As I advise the company’s leadership about ways to develop and grow wisely, I will likely need to serve as a mirror in some cases and to be an interviewer in others. I anticipate formulating grounded theories over time about what works and doesn’t work, and what motivates or discourages. I also hope that there will be one or more employees, managers, or departments who will be great case studies in superior customer service, effective process improvement, innovation, or cross-border teamwork. I also look forward to using qualitative methods to develop a new performance management system. I know already to begin with dialogue and interviews about what performance is and should be and what the culture can possibly support in the way of tools, forms and procedures.

Regardless of the design, a constant respect for ethical behavior will be necessary. I will apply the required researcher behaviors that we identified via our grounded theory activity to my role. Obviously, I will need to ensure confidentiality for all employees and managers, as well as manage the risks of any “research” I conduct. I will also ensure that when I leave the organization, the leaders, the employees and the business will be in a better place and a stronger position than the one in which I found them.

Undoubtedly, a mixed-methods approach will best serve me and this organization. I look forward to being a pirate and ninja, in equal measure!

Sunday, April 3, 2011

Skip to the End!

This is one of my favorite lines from my favorite movie of all time, The Princess Bride. Prince Humperdinck was trying to make sure the “mahwage” ceremony was finalized before Buttercup was rescued by the Dread Pirate Roberts.

I was reminded of it when Professor Croasdaile used a similar line in response to my plea for help on the Introduction/literature review section of the proposal. Prior to class last Thursday, I had spent all of Wednesday and the better part of the prior weekend trying to get started on filling in my outline of major concepts and topics. This first section was clearly the biggest challenge of the project so far for me.

But, the advice was sound. When I sat down to get started again on Friday morning, I found that putting together the pieces and parts of the sample, procedure and measures was much easier than trying to first be eloquent about concepts and theories. I was able to address the nitty-gritty details (who/what/when/how) of the question I wanted to answer which in turn guided me back to a review of the literature that had piqued my interest in the first place.

After this rough start, I feel pretty good about the draft I’ve just turned in. I do hope to receive feedback on some additional concerns though. I worry about the measure that I’ve created in Appendices A & B. It is adapted from other measures that have been validated separately but never as a whole. How do I express the need to validate the instrument and how specific do I need to be about how that will be done?

I am unsure if I’ve transferred everything that is in my head down onto the paper. Have I fully defined all of the variables for the reader or have I left out points because I just naturally understand them after reading for the past two and a half months? Really, I am having trouble reading it objectively at this point.

My task ahead: let this sit for a while. Then, with a refreshed mind, I’ll hope to be able to process the feedback and finish this proposal in time for the April 26 deadline.

As a side comment, I think the advice to "Skip to the end" is valuable in other settings as well. Sometimes, giving ourselves permission to move on to the people, the details, the process, and the execution lends great clarity to the end goal and desire. Sometimes, not having every idea researched, source uncovered, and thought written out perfectly is the way forward.

Friday, March 25, 2011

Say phenomenological three times... fast!

Hypothetically speaking, of course, if I were to decide to pursue a qualitative study of teacher organizational citizenship behaviors and pay-for-performance incentives, I could do either a phenomenological or an ethnographic study. The chosen approach would depend on whether I focused on just the identification and existence of extra-role behaviors and a shared cooperative culture of teachers (ethnography) or on their experiences as they relate and react to the implementation of an incentive pay system (phenomenological study). Or perhaps I could accomplish both!

In the case of an ethnography, I would start with a foreshadowed problem generated from the active debate within existing research and theory about the existence and definition of teacher organizational citizenship behaviors. I might choose to focus my study on high school teachers and then enter into a relationship with a particular high school in a particular district for the express purpose of sampling, observing, obtaining and analyzing data about the behaviors outside of those required by formal teacher job descriptions. I would seek to identify a pattern from these observations and data about why, how, when and for whom teachers express OCB.

Backing up to review individual aspects of an ethnography: since I cannot possibly study the extra-role behaviors of a comprehensive sample of high school teachers across the U.S., or even likely all teachers in a particular district, I would have to engage a purposeful, case-based sample of teachers in one or two particular high schools. To protect validity and support any efforts at generalizability, I would need to identify all of the subject/selection threats inherent in the demographics of the district and school with which I chose to engage.

To obtain data, I would work as a moderate participant to record comprehensive observations, both structured and unstructured. I would also arrange to observe applicable committee and team meetings as well as ask questions about behaviors that I do observe. Interviews would also be very important to my study so that I could engage individual teachers in conversation about their behaviors and their own observations of others’ behavior. I foresee that document reviews would be valuable in that I could review the job descriptions and performance evaluations of the teachers to ensure that I could accurately differentiate what was in-role versus extra-role behavior and expectations.

A separate, additional phenomenological study might also be intriguing in that I could then build a further understanding of teacher OCB as it is influenced by the implementation or existence of incentive pay schemes. It would be very cool to query teachers through in-depth interviews or focus groups about their observations, reactions, and feelings (drama!) about pay-for-performance. If I were then able to interpret the data with any significant measure of reliability, I could hope to build new understanding about the use of incentive pay for teachers. Insight into how teachers experience pay-for-performance vis-à-vis the behaviors that they report being incentivized or discouraged would be very valuable feedback to have when considering the design of future teacher compensation systems.

Outside of this particular research problem relating to teacher behavior, I think a phenomenological study of pay-for-performance in any corporate environment could be extremely meaningful. There is a plethora of reports and studies claiming that pay-for-performance is ineffective, broken, and not worthwhile, but I don’t recall seeing any that try to derive meaning about pay-for-performance systems based on a qualitative investigation of participant experiences with them.

Friday, March 18, 2011

A sample of my literature review

Study #1: Extra-role behaviors: job satisfaction, efficacy and multiple dimensions

The first study was by researchers in Israel who looked at the components of extra-role behavior (ERB) and the correlational relationship between ERB and three variables: job satisfaction, self-efficacy, and collective efficacy. The researchers defined extra-role behavior for the purposes of their study as,
"those behaviors that go beyond specified role requirements, and
are directed towards the individual, the group, or the organization as a unit,
in order to promote organizational goals." (Somech & Drach-Zahavy, 2000, p. 650)

The sampling frame was elementary school teachers in northern Israel. Questionnaires were distributed to 375 teachers from 13 different elementary schools, of which 251 were returned to yield a response rate of 67%.
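Just to double-check that response rate myself (a quick back-of-the-envelope calculation using the counts reported above, nothing more from the study):

```python
# Counts reported in the study summary.
distributed = 375
returned = 251

response_rate = returned / distributed  # 251/375 ≈ 0.669
print(f"Response rate: {response_rate:.0%}")  # prints "Response rate: 67%"
```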

To develop a measure of extra-role behavior, they conducted semi-structured interviews of five principals and 25 teachers, who were asked to list behaviors, outside of formal role requirements, that teachers exhibit to benefit the student, their team, or the school unit. The 60 items produced from this process were validated by teachers who were involved in a management-training program. The list was reduced in this way and then, through factor analysis, organized into three subscales of eight items each, measuring ERB towards the student, towards the team, and towards the organization.

To measure the variables of job satisfaction, self-efficacy, and collective efficacy, three additional scales based on prior research were adapted for use with teachers. All three of these measures had respondents indicating their agreement or disagreement with the test items using a five- or seven-point Likert scale.

To test the first hypothesis regarding the components of ERB, the researchers performed a factor analysis, which yielded three separate factors of ERB. The Factor 1 dimension included behaviors geared towards helping other teachers and so was recognized as ERB towards team members. Factor 2 behaviors were generally identified as those aimed at improving conditions for the school as a unit and so were termed ERB towards the organization. Factor 3 consisted of behaviors intended to improve student performance or the quality of the teaching; it was termed ERB towards the student.

Correlation analysis was used to test the other hypotheses regarding the relationships between job satisfaction, self-efficacy, and collective efficacy and each of the three levels of ERB (team, organization, student). A positive and significant relationship was found between teacher satisfaction and ERB at all three levels. Self-efficacy was also found to be correlated with ERB towards the team and the organization, but no significant relationship was found between self-efficacy and ERB towards the student. There was a positive and significant relationship between collective efficacy and ERB towards the team, but no relationship between collective efficacy and ERB towards the organization or the student.
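For my own reference, this is roughly what a correlation analysis computes for each variable pair. The Pearson formula is standard, but the sample scores below are entirely my own invention for illustration, not data from the study:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Made-up scores: each value is one teacher's mean across Likert-scale items
# for job satisfaction and for ERB towards the team.
satisfaction = [3.2, 4.1, 2.8, 4.5, 3.9, 2.5, 4.0, 3.6]
erb_team     = [3.0, 4.4, 2.6, 4.2, 4.1, 2.9, 3.8, 3.5]
print(round(pearson_r(satisfaction, erb_team), 2))  # strong positive: these toy scores move together
```

A coefficient near +1 (and statistically significant for the sample size) is what "positive and significant relationship" boils down to in the write-up above.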

The results of the factor analysis support the discussion of ERB as multidimensional. The extent to which the three different variables were or were not correlated with the various levels of ERB further supports this. Altogether, the research suggests that formal efforts to increase the levels of job satisfaction, self-efficacy, or collective efficacy will have varying impacts on levels and types of extra-role behaviors.

Study #2 – Organizational Citizenship Behavior and Pay-for-Performance plans

This study addressed whether there was an unintended negative interaction between pay-for-performance plans designed to reward and encourage certain in-role behaviors and those behaviors that are not formally rewarded or controlled through job descriptions but which yield value to the organization. Also, the researchers looked to identify whether the degree of interest alignment between the employer and the employee moderated the impact of pay-for-performance plans on those organizational citizenship, or extra-role, behaviors.

The sampling frame was employees in certain benchmark jobs and their supervisors, from eight different utility companies across the United States. The supervisors of the benchmark jobs in each company were chosen first and then asked to randomly select up to 10 subordinates, who were then surveyed about attitudes towards the job and the work organization (n = 660). The sample was further culled by matching to supervisor responses and confirming participation in a pay-for-performance plan. The remaining subsample consisted of 146 employees.

The survey issued to participants included items from several pre-existing measures of organizational citizenship behaviors, value alignment, and procedural justice, as well as items developed purposely to measure the employee’s perception of the link between performance and pay.

A regression analysis allowed the researchers to determine the extent to which the effect of the pay-for-performance link on OCB was influenced by the employee’s value commitment. It was found that if employees’ values are not aligned with the organization’s, their level of OCB is lower the more strongly they perceive a link between their performance and their pay. Conversely, if the employee is committed to the organization, OCB increases as the pay-performance link strengthens. Procedural justice was also found to be a significant predictor of OCB.
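To make sure I understand the moderation effect, here is a toy version of that kind of interaction model. The coefficients are completely invented; only the pattern, where the sign of the pay-link effect flips with commitment, mirrors the reported result:

```python
def predicted_ocb(pay_link, commitment, b0=3.0, b1=-0.5, b2=0.2, b3=0.9):
    """Toy moderated regression (invented coefficients):
    OCB = b0 + b1*pay_link + b2*commitment + b3*(pay_link * commitment)."""
    return b0 + b1 * pay_link + b2 * commitment + b3 * pay_link * commitment

# Low commitment (0): strengthening the pay-performance link LOWERS predicted OCB.
print(predicted_ocb(1, 0), predicted_ocb(5, 0))  # 2.5 falls to 0.5
# High commitment (1): the same change RAISES predicted OCB.
print(round(predicted_ocb(1, 1), 1), round(predicted_ocb(5, 1), 1))  # 3.6 rises to 5.2
```

The interaction term (b3) is what carries the moderation: without it, the pay-link effect would be the same for every employee regardless of commitment.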

This research is valuable in indicating that pay-for-performance plans will not necessarily negatively affect the level of organizational citizenship behavior displayed by the employee. In fact, if the employee and organization are “mutually invested”, employees may actually increase their OCB in response to the organization’s commitment to reward performance with pay.

On the topic of validity, both studies described limitations of their research and results. First, the measures utilized in both were either specially designed for the purpose or adapted from different pre-existing measures; there was no significant discussion in either study of whether the survey items were themselves validated. Second, although relationships between variables and OCB were evidenced, there was no testing of the directionality of the effects. Finally, both studies used only self-reports of behavior and attitude, which means that the results indicate relationships but only from the perspective of one source type. Therefore, the results of both studies are not generalizable, and so they are much less useful in pointing a clear and convincing way forward on specific policy or program changes that could be implemented to increase or protect OCB.

References:

Somech, A., & Drach-Zahavy, A. (2000). Understanding extra-role behavior in schools: The relationships between job satisfaction, sense of efficacy, and teachers’ extra-role behavior. Teaching and Teacher Education, 16(5-6), 649-659. doi:10.1016/S0742-051X(00)00012-3

Deckop, J. R., Mangel, R., & Cirka, C. C. (1999). Research notes. Getting more than you pay for: Organizational citizenship behavior and pay-for-performance plans. Academy of Management Journal, 42(4), 420-428.

Sunday, March 6, 2011

Managing validity

The concept of validity has been a lecture or exam topic in at least one class during each of my five semesters in graduate school. I have been asked to understand the validity of correlations between the Big Five personality traits and job performance, to explain the validity of different tools used in recruiting and staffing, and now, finally, to assess the validity of my own research. Over this time period, I had become familiar with, and even memorized, various definitions of validity, and yet I was still quite impressed with the explanation offered in our textbook. Dr. McMillan provided a suddenly more relevant understanding for me, perhaps only partly because I knew that it was a concept worth 6% of our research proposal grade. On page 144, Dr. McMillan advises that validity is a “judgment of the appropriateness of a measure for the specific inferences or decisions that result from the scores generated by the measure.” For some reason, this explanation captured the “big picture of validity” better than the other, more technical definitions I had learned previously. For me now, validity is as much about what I do or decide as a result of my findings as it is about how accurate my tools are or what relationships are observed. So as I develop this research aimed at observing the impact of merit pay incentives on teachers’ organizational citizenship behaviors, I want to carefully protect the instrument, the subjects, the treatment and the environment so that I am able to present inferences and suggest actions that are altogether appropriate and sound.

Although I am still searching for the best measure of organizational citizenship behaviors, I can foresee and will manage the threats to validity that the instrument itself can present. For instance, I know already that I cannot use a tool “off the shelf” that measures organizational citizenship behaviors (OCB) generally; instead, I require a measure that has been adjusted for the school setting. Luckily, there is a measurement model that has been developed for use with teachers, but it includes additional scales that may not be relevant to the OCB discussion. So if I do make changes to this tool, the validity of the edited measure can only be confirmed if I reanalyze its “fit” to the theoretical model and to any other potentially valid measures.

In this study, I propose to measure and compare the levels of OCB exhibited by teachers who receive performance pay incentives with those of teachers who receive just a standard scheduled salary. In that the incentive compensation provided to the teachers is the “treatment,” I do not anticipate that the control group of teachers who receive only straight salary will inadvertently be “treated.” I do, however, need to stay extremely wary that both the control and the experimental group may be receiving a “treatment” via their environments, i.e., their departments, buildings, districts and communities, that impacts their OCB levels. In this study, then, the dependent variable is more at risk of being compromised by extraneous events, timelines and settings than the independent variable.
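Since the core comparison is between two groups' OCB scores, the eventual analysis might boil down to something like a two-sample t statistic. Everything below is a sketch with invented scores, not real data or a final analysis plan:

```python
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances allowed)."""
    mean_a, mean_b = statistics.fmean(a), statistics.fmean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / ((var_a / len(a) + var_b / len(b)) ** 0.5)

# Invented OCB scale scores (1-5) for the two groups of teachers.
merit_pay_group = [3.1, 2.8, 3.4, 2.9, 3.0, 3.2]
salary_group    = [3.6, 3.9, 3.3, 3.8, 3.5, 3.7]
print(round(welch_t(merit_pay_group, salary_group), 2))  # negative here: toy merit group scored lower
```

Of course, even a large t only shows a difference between groups, not why it exists; all the selection and environment threats above would still be in play.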

I sense that the greatest risk to validity comes in the course of selecting and maintaining the study’s subjects themselves. As we all know, people are strange and generally act weirdly, especially when it comes to professional motivations and behaviors. That said, as much as I attempt to control for natural and cultural characteristics that affect OCB, I will undoubtedly fail to identify groups of subjects that are absolutely equal in their predispositions. Working in my favor, though, previous studies of OCB have shown that individual characteristics like personality have limited associations with OCB, and so it has been suggested that OCB is a result of cognitive and rational judgments rather than emotional ones. If it were the reverse, I would need to control even more carefully for subject affectivity.

The effect of time, and of the announcement of the study itself, on my subjects is another obvious threat to the validity of my study. Since it is not practical to announce the study and collect all the necessary data at one time, I can presume that my presence and the notice of the study will operate as an intervention in and of themselves. The observation of OCB may already be compromised by changes in teachers’ actions between the time the study is announced and when data are collected. As I finalize and summarize my literature review, I intend to keep my eyes open for discussions about how this specific threat to validity can be minimized. I imagine it is a threat not only for my study but for most others out there as well.

Even as I manage these threats to validity through careful design of the measure, documentation of the environment, and the “proper care and feeding” of my subjects, data analysis may still yield results that prevent any meaningful inferences or recommendations from being drawn. Although that would be very unfortunate, as Dr. McMillan would have us understand the concept of validity, recognition that a decision or inference should not be drawn from the study may still be a valid conclusion.

Sunday, February 27, 2011

Beware the causes!

I have read two particular studies in the past few days, one for this class and another for a Guided Study in International Human Resources, in which the authors "point blank" warned against making assumptions about causality based on the data they collected. Both were designed to assess correlations between variables, so I’ll first discuss the dangers of implied causation in correlational studies as they were highlighted by our textbook author. Dr. McMillan points out that even if a positive, negative, or predictive relationship between two or more variables is demonstrated, the causality of that relationship should not be assumed, because a correlational design is not able to provide evidence about causal direction.

This particular problem was illustrated in an article about the factors that influence the work role adjustment of an expatriate manager in Japan. In his study, Black (1988) hypothesized that the level of the family’s adjustment is correlated with the manager’s adjustment, but he backed away from trying to assess causal direction because the family’s adjustment (or lack thereof) could be caused by the manager’s adjustment (or lack thereof) just as easily as the manager’s adjustment could be caused by the family’s adjustment.

The second study that mentioned causality as a limitation is one that I am reading for my research proposal, about the relationships between several individual and organizational characteristics and organizational citizenship behaviors (OCB) in schools. At the end of the article, the authors (Somech & Ron, 2007) suggest that since they used a cross-sectional survey for their study, they were not able to assess the extent of a causal relationship between the variables and the OCB that they otherwise were able to show were correlated.

The other warning that Dr. McMillan issues about inferring causation from correlational studies is really a warning about taking actions based on observed correlations. Going back to the studies I’ve picked as examples: if one were to deploy 100% of available resources to address and improve the family’s adjustment to the international assignment when it is actually the manager’s issues causing the family’s disorder, the organization would be misallocating its resources. And based on the OCB study, if it were determined that OCB precedes the evidence of organizational characteristics, then policies that focus resources on replicating the related organizational characteristics would be misaligned.

This assumption of causation is equally a problem in comparative studies. Even if a study shows that there are significant similarities or differences between groups when certain variables are assessed, it does not mean that the groups are similar or different because of the independent variable. To illustrate this, I’ll use an aspect of my own planned research proposal. If I were to execute a comparative study only, and I determined that the OCB reported by teachers who receive merit pay incentives is lower than that reported by teachers who do not, it would be very unwise to stop there with the conclusion that merit pay reduces teacher OCB. The lower OCB observation could very well be related to a different characteristic or variable that I have not even included in my study. My conclusion, then, should be to propose research that digs deeper into other causes or implications of OCB and how those might also interact with merit incentives.

Citations for the studies used as examples above are:

Black, J. S. (1988). Work role transitions: A study of American expatriate managers in Japan. Journal of International Business Studies, 19(2), 277-294.

Somech, A., & Ron, I. (2007). Promoting organizational citizenship behavior in schools: The impact of individual and organizational characteristics. Educational Administration Quarterly, 43(1), 38-66. doi:10.1177/0013161X06291254