Shared Assumptions About Measuring Results and Correction Mechanisms

All groups and organizations need to know how they are doing against their goals and periodically need to check whether they are performing in line with their mission. This process involves three areas in which the group needs to achieve consensus, leading to cultural dimensions that later drop out of awareness and become basic assumptions. Consensus must be achieved on what to measure, how to measure it, and what to do when corrections are needed. The cultural elements that form around each of these issues often become the primary concern of newcomers to the organization, because such measurements inevitably become linked to how each employee is doing his or her job.

1. Consensus on What to Measure

Once the group is performing, it must have consensus on how to judge its own performance so that it knows what kind of remedial action to take when things do not go as expected. For example, we have noted that early in DEC’s history, the evaluation of engineering projects hinged on whether certain key engineers in the company “liked” the product. The company assumed that internal acceptance was an acceptable surrogate for external acceptance. At the same time, if several competing engineering groups each liked what they were designing, the criterion shifted to “letting the market decide.” These criteria could work in tandem as long as there were enough resources to support all the projects, because DEC was growing at a rapid rate.

At the Wellmade flute company, evaluation was done at each node in the production process, so that by the time an instrument reached the end of the line, it was likely to pass inspection and to be acceptable to the artist. If a craftsman at a given position did not like what he felt or saw or heard, he simply passed the instrument back to the preceding craftsman; the norm was that it would be reworked without resentment. Each person trusted the person in the next position (Cook, personal communication, March 10, 1992).

Cook also found a similar process in a French brandy company. Not only was each step evaluated by an expert, but the ultimate role of “taster”—the person who makes the final determination of when a batch is ready—could only be held by a son of the previous taster. In this company, the last taster had no sons. Rather than pass the role on to the eldest daughter, it was passed on to a nephew, on the assumption that female taste preferences were in some fundamental way different from male taste preferences!

I was involved at one point in the 1980s with the exploration and production division management of the U.S. Shell Oil Company. My consulting assignment was to help them do a cultural analysis to develop better “measurements” of the division’s performance. As we collectively began to examine the artifacts and espoused beliefs and values of this group, it immediately became apparent that the exploration group and the production group had completely different concepts of how they wanted to be measured.

The exploration group wanted to be measured on finding evidence of oil, which they felt should be determined on a statistical basis over a long period of time because most wells proved to be “dry.” In contrast, the production group, which was charged with safely removing oil from an active well, wanted to be measured on a short-term basis in terms of safe and efficient “production.” For the exploration group, the risk was in not finding anything over a long period of time; for the production group, the risk was of an accident or fire, which could occur at any moment. In the end, both groups wanted to contribute to the financial performance of the company, so both the cost of exploration and the cost of safe production had to be factored in, but neither group wanted to be measured by a general criterion that did not fit its work.

Some companies teach their executives to trust their own judgment as a basis for decisions; others teach them to check with their bosses; still others teach them not to trust results unless they are based on hard data, such as test markets or at least market research; and still others teach them to rely on staff experts. If members of the group hold widely divergent concepts of what to look for and how to evaluate results, they cannot decide when and how to take remedial action.

For example, senior managers within companies often hold different views of how to assess financial performance—debt/equity ratios, return on sales, return on investment, stock price, credit rating, and other indicators could all be used. If senior management cannot agree on which indicator to pay primary attention to, they cannot decide how well they are doing and what corrective action, if any, they need to take.

Debates can occur over whether financial criteria should override criteria such as customer satisfaction, market share, or employee morale. These debates are complicated by potential disagreements on the correct time horizons to use in making evaluations—daily, monthly, quarterly, annually, or what? Even though the information systems may be very precise, such precision does not guarantee consensus on how to evaluate information.

The potential complexity of achieving consensus on measurement criteria was illustrated in an international refugee organization. Field workers measured themselves by the number of refugees processed, but senior management paid more attention to how favorable the attitudes of host governments were, because those governments financed the organization through their contributions. Senior management therefore checked every decision that was to be made about refugees with virtually every other department and several layers of management to ensure that the decision would not offend any of the supporting governments. However, this process markedly slowed the decision making and often led to “lowest common denominator” conservative decisions. This, in turn, led to great consternation on the part of field workers, who felt that while management was dawdling to get everyone’s approval, they were dealing with crisis situations in the field in which a slowdown might mean death for significant numbers of refugees. They perceived top management to be hopelessly mired in what they considered to be simply bureaucratic tangles, and they did not understand the caution that top management felt it had to exercise toward sponsoring governments.

The lack of agreement across the hierarchy on how to judge success illustrates the importance of subcultures in organizations. Whereas the field workers tended to think of the core mission as helping the survival of refugees, senior management was clearly more concerned with the survival of the total organization, which, in its view, depended on how it related to the United Nations and to the host governments. Senior management had to decide whether to indoctrinate field workers more effectively on what the core organizational survival problem really was or to live with the internal conflict that the lack of consensus seemed to generate. On the other hand, the younger, idealistic field workers could well argue (and did) that to survive as an organization made no sense if the needs of refugees were not met. This organization, then, had conflicting cultural assumptions or conflicting subcultures, in that headquarters and the field each had internal consensus but there was no organization-wide consensus on mission, goals, and means.

In Ciba-Geigy a comparable subculture issue arose in evaluating the performance of different divisions. The high-performing division heads chose to compare themselves internally to the low-performing divisions and were, therefore, complacent and satisfied with their performance. Senior management, on the other hand, chose to compare divisions to their external competitors in the same product/market space and found that some were underperforming by this criterion. For example, the pharmaceutical division outperformed the other chemical divisions but did poorly relative to other pharmaceutical companies. Yet the corporate assumption that “we are one family” made it hard to convince the pharmaceutical division managers to accept the tougher “external” standards.

2. Consensus on Means of Measurement

Consensus must be achieved both on the criteria and on the means by which information is to be gathered. For example, in DEC’s early years, a very open communication system developed, built around high levels of acquaintance and trust among the members of the organization. This system was supported by a computerized e-mail network, constant telephone communications, frequent visits, formal and informal surveys and sensing meetings, and two- to three-day committee meetings in settings away from the office. Individual managers developed their own systems of measurement and were trusted to report progress accurately. DEC operated on the powerful shared assumption that information and truth were the lifeblood of the organization, and the company built many formal and informal mechanisms to ensure a high rate of internal communication, such as the rule in the early years that engineers’ offices were not to have doors. They were to be easily accessible to each other physically and through the worldwide electronic network.

Ken Olsen “measured” things by walking around, talking to people at all levels of the organization, and sensing morale from the climate he encountered. The informal measures were much more important initially than formal financial controls, and consensus developed around the assumption that “we will always be open and truthful with each other.”

In contrast, in Ciba-Geigy there was a tightly structured reporting system, which involved weekly telephone calls, monthly reports to the financial control organization in headquarters, semi-annual visits to every department by headquarters teams, and formal meetings and seminars at which policy was communicated downward in the organization. In Ciba-Geigy the main assumption appeared to be that information flowed primarily in designated channels, and informal systems were to be avoided because they could be unreliable. Subculture issues came up around the assessment of scientific information, especially about drugs. The company had laboratories both in the United States and in Europe, and information was assumed to be equally valid in both sets of labs. Yet scientists often reported that they did not entirely trust the data from the other organization because they were perceived to be using somewhat different standards.

In summary, the methods an organization decides to use to measure its own activities and accomplishments—the criteria it chooses and the information system it develops to measure itself—become central elements of its culture as consensus develops around these issues. If consensus fails to develop, and strong subcultures form around different assumptions, the organization will find itself in conflicts that can potentially undermine its ability to cope with the external environment.

Source: Schein, Edgar H. (2010). Organizational Culture and Leadership, 4th edition. Jossey-Bass.
