


BMJ. 2000 Mar 18; 320(7237): 768–770.

Human error: models and management

The human error problem can be viewed in two ways: the person approach and the system approach. Each has its model of error causation, and each model gives rise to quite different philosophies of error management. Understanding these differences has important practical implications for coping with the ever present risk of mishaps in clinical practice.

Summary points

  • Two approaches to the problem of human fallibility exist: the person and the system approaches

  • The person approach focuses on the errors of individuals, blaming them for forgetfulness, inattention, or moral weakness

  • The system approach concentrates on the conditions under which individuals work and tries to build defences to avert errors or mitigate their effects

  • High reliability organisations—which have less than their fair share of accidents—recognise that human variability is a force to harness in averting errors, but they work hard to focus that variability and are constantly preoccupied with the possibility of failure

Person approach

The longstanding and widespread tradition of the person approach focuses on the unsafe acts—errors and procedural violations—of people at the sharp end: nurses, physicians, surgeons, anaesthetists, pharmacists, and the like. It views these unsafe acts as arising primarily from aberrant mental processes such as forgetfulness, inattention, poor motivation, carelessness, negligence, and recklessness. Naturally enough, the associated countermeasures are directed mainly at reducing unwanted variability in human behaviour. These methods include poster campaigns that appeal to people's sense of fear, writing another procedure (or adding to existing ones), disciplinary measures, threat of litigation, retraining, naming, blaming, and shaming. Followers of this approach tend to treat errors as moral issues, assuming that bad things happen to bad people—what psychologists have called the just world hypothesis.1

System approach

The basic premise in the system approach is that humans are fallible and errors are to be expected, even in the best organisations. Errors are seen as consequences rather than causes, having their origins not so much in the perversity of human nature as in "upstream" systemic factors. These include recurrent error traps in the workplace and the organisational processes that give rise to them. Countermeasures are based on the assumption that though we cannot change the human condition, we can change the conditions under which humans work. A central idea is that of system defences. All hazardous technologies possess barriers and safeguards. When an adverse event occurs, the important issue is not who blundered, but how and why the defences failed.

Evaluating the person approach

The person approach remains the dominant tradition in medicine, as elsewhere. From some perspectives it has much to commend it. Blaming individuals is emotionally more satisfying than targeting institutions. People are viewed as free agents capable of choosing between safe and unsafe modes of behaviour. If something goes wrong, it seems obvious that an individual (or group of individuals) must have been responsible. Seeking as far as possible to uncouple a person's unsafe acts from any institutional responsibility is clearly in the interests of managers. It is also legally more convenient, at least in Britain.

Nevertheless, the person approach has serious shortcomings and is ill suited to the medical domain. Indeed, continued adherence to this approach is likely to thwart the development of safer healthcare institutions.

Although some unsafe acts in any sphere are egregious, the vast majority are not. In aviation maintenance—a hands-on activity similar to medical practice in many respects—some 90% of quality lapses were judged as blameless.2 Effective risk management depends crucially on establishing a reporting culture.3 Without a detailed analysis of mishaps, incidents, near misses, and "free lessons," we have no way of uncovering recurrent error traps or of knowing where the "edge" is until we fall over it. The complete absence of such a reporting culture within the Soviet Union contributed crucially to the Chernobyl disaster.4 Trust is a key element of a reporting culture and this, in turn, requires the existence of a just culture—one possessing a collective understanding of where the line should be drawn between blameless and blameworthy actions.5 Engineering a just culture is an essential early step in creating a safe culture.

Another serious weakness of the person approach is that by focusing on the individual origins of error it isolates unsafe acts from their system context. As a result, two important features of human error tend to be overlooked. Firstly, it is often the best people who make the worst mistakes—error is not the monopoly of an unfortunate few. Secondly, far from being random, mishaps tend to fall into recurrent patterns. The same set of circumstances can provoke similar errors, regardless of the people involved. The pursuit of greater safety is seriously impeded by an approach that does not seek out and remove the error provoking properties within the system at large.

The Swiss cheese model of system accidents

Defences, barriers, and safeguards occupy a key position in the system approach. High technology systems have many defensive layers: some are engineered (alarms, physical barriers, automatic shutdowns, etc), others rely on people (surgeons, anaesthetists, pilots, control room operators, etc), and yet others depend on procedures and administrative controls. Their function is to protect potential victims and assets from local hazards. Mostly they do this very effectively, but there are always weaknesses.

In an ideal world each defensive layer would be intact. In reality, however, they are more like slices of Swiss cheese, having many holes—though unlike in the cheese, these holes are continually opening, shutting, and shifting their location. The presence of holes in any one "slice" does not normally cause a bad outcome. Usually, this can happen only when the holes in many layers momentarily line up to permit a trajectory of accident opportunity—bringing hazards into damaging contact with victims (figure).
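To make the alignment idea concrete, the following is a minimal illustrative sketch (not part of the original article): each defensive layer is given an assumed probability that a hole is open at a given moment, and an accident trajectory exists only when every layer happens to be breached at once. The function names and probabilities are hypothetical.

    import random

    def holes_align(hole_probabilities, rng=random.random):
        """Return True if, at a given moment, a hole happens to be open in
        every defensive layer at once (a trajectory of accident opportunity)."""
        return all(rng() < p for p in hole_probabilities)

    def estimate_alignment_rate(hole_probabilities, trials=100_000):
        """Estimate by simulation how often the holes in all layers line up."""
        hits = sum(holes_align(hole_probabilities) for _ in range(trials))
        return hits / trials

    # Three independent layers, each with a 10% chance of an open hole at any
    # moment: full alignment should occur in roughly 0.1 ** 3 = 0.1% of trials.
    print(estimate_alignment_rate([0.1, 0.1, 0.1]))

The sketch simply illustrates that imperfect but independent layers multiply down the chance of a complete breach, which is why defences in depth usually work—until the holes drift into line.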

The holes in the defences arise for two reasons: active failures and latent conditions. Nearly all adverse events involve a combination of these two sets of factors.

Active failures are the unsafe acts committed by people who are in direct contact with the patient or system. They take a variety of forms: slips, lapses, fumbles, mistakes, and procedural violations.6 Active failures have a direct and usually shortlived impact on the integrity of the defences. At Chernobyl, for example, the operators wrongly violated plant procedures and switched off successive safety systems, thus creating the immediate trigger for the catastrophic explosion in the core. Followers of the person approach often look no further for the causes of an adverse event once they have identified these proximal unsafe acts. But, as discussed below, virtually all such acts have a causal history that extends back in time and up through the levels of the system.

Latent conditions are the inevitable "resident pathogens" within the system. They arise from decisions made by designers, builders, procedure writers, and top level management. Such decisions may be mistaken, but they need not be. All such strategic decisions have the potential for introducing pathogens into the system. Latent conditions have two kinds of adverse effect: they can translate into error provoking conditions within the local workplace (for example, time pressure, understaffing, inadequate equipment, fatigue, and inexperience) and they can create longlasting holes or weaknesses in the defences (untrustworthy alarms and indicators, unworkable procedures, design and construction deficiencies, etc). Latent conditions—as the term suggests—may lie dormant within the system for many years before they combine with active failures and local triggers to create an accident opportunity. Unlike active failures, whose specific forms are often hard to foresee, latent conditions can be identified and remedied before an adverse event occurs. Understanding this leads to proactive rather than reactive risk management.

We cannot change the human condition, but we can change the conditions under which humans work

To use another analogy: active failures are like mosquitoes. They can be swatted one by one, but they still keep coming. The best remedies are to create more effective defences and to drain the swamps in which they breed. The swamps, in this case, are the ever present latent conditions.

Error management

Over the past decade researchers into human factors have been increasingly concerned with developing the tools for managing unsafe acts. Error management has two components: limiting the incidence of dangerous errors and—since this will never be wholly effective—creating systems that are better able to tolerate the occurrence of errors and contain their damaging effects. Whereas followers of the person approach direct most of their management resources at trying to make individuals less fallible or wayward, adherents of the system approach strive for a comprehensive management programme aimed at several different targets: the person, the team, the task, the workplace, and the institution as a whole.3

High reliability organisations—systems operating in hazardous conditions that have fewer than their fair share of adverse events—offer important models for what constitutes a resilient system. Such a system has intrinsic "safety health"; it is able to withstand its operational dangers and yet still achieve its objectives.

Some paradoxes of high reliability

Just as medicine understands more about disease than health, so the safety sciences know more about what causes adverse events than about how they can best be avoided. Over the past 15 years or so, a group of social scientists based mainly at Berkeley and the University of Michigan has sought to redress this imbalance by studying safety successes in organisations rather than their rare but more conspicuous failures.7,8 These success stories involved nuclear aircraft carriers, air traffic control systems, and nuclear power plants (box). Although such high reliability organisations may seem remote from clinical practice, some of their defining cultural characteristics could be imported into the medical domain.

Most managers of traditional systems attribute human unreliability to unwanted variability and strive to eliminate it as far as possible. In high reliability organisations, on the other hand, it is recognised that human variability in the shape of compensations and adaptations to changing events represents one of the system's most important safeguards. Reliability is "a dynamic non-event."7 It is dynamic because safety is preserved by timely human adjustments; it is a non-event because successful outcomes rarely call attention to themselves.

High reliability organisations can reconfigure themselves to suit local circumstances. In their routine mode, they are controlled in the conventional hierarchical manner. But in high tempo or emergency situations, control shifts to the experts on the spot—as it often does in the medical domain. The organisation reverts seamlessly to the routine control mode once the crisis has passed. Paradoxically, this flexibility arises in part from a military tradition—even civilian high reliability organisations have a large proportion of ex-military staff. Military organisations tend to define their goals in an unambiguous way and, for these bursts of semiautonomous activity to be successful, it is essential that all the participants clearly understand and share these aspirations. Although high reliability organisations expect and encourage variability of human action, they also work very hard to maintain a consistent mindset of intelligent wariness.8

Blaming individuals is emotionally more satisfying than targeting institutions.

Perhaps the most important distinguishing characteristic of high reliability organisations is their collective preoccupation with the possibility of failure. They expect to make errors and train their workforce to recognise and recover them. They continually rehearse familiar scenarios of failure and strive hard to imagine novel ones. Instead of isolating failures, they generalise them. Instead of making local repairs, they look for system reforms.

High reliability organisations

So far, three types of high reliability organisations have been investigated: US Navy nuclear aircraft carriers, nuclear power plants, and air traffic control centres. The challenges facing these organisations are twofold:

  • Managing complex, demanding technologies so as to avoid major failures that could cripple or even destroy the organisation concerned

  • Maintaining the capacity for meeting periods of very high peak demand, whenever these occur.

The organisations studied7,8 had these defining characteristics:

  • They were complex, internally dynamic, and, intermittently, intensely interactive

  • They performed exacting tasks under considerable time pressure

  • They had carried out these demanding activities with low incident rates and an almost complete absence of catastrophic failures over several years.

Although, on the face of it, these organisations are far removed from the medical domain, they share important characteristics with healthcare institutions. The lessons to be learnt from these organisations are clearly relevant for those who manage and operate healthcare institutions.

[Photograph; credit: US Navy]

Conclusions

High reliability organisations are the prime examples of the system approach. They anticipate the worst and equip themselves to deal with it at all levels of the organisation. It is hard, even unnatural, for individuals to remain chronically uneasy, so their organisational culture takes on a profound significance. Individuals may forget to be afraid, but the culture of a high reliability organisation provides them with both the reminders and the tools to help them remember. For these organisations, the pursuit of safety is not so much about preventing isolated failures, either human or technical, as about making the system as robust as is practicable in the face of its human and operational hazards. High reliability organisations are not immune to adverse events, but they have learnt the knack of converting these occasional setbacks into enhanced resilience of the system.

[Figure] The Swiss cheese model of how defences, barriers, and safeguards may be penetrated by an accident trajectory

Footnotes

  Competing interests: None declared.

References

1. Lerner MJ. The desire for justice and reactions to victims. In: McCauley J, Berkowitz L, editors. Altruism and helping behavior. New York: Academic Press; 1970.

2. Marx D. Discipline: the role of rule violations. Ground Effects. 1997;2:1–4.

3. Reason J. Managing the risks of organizational accidents. Aldershot: Ashgate; 1997.

4. Medvedev G. The truth about Chernobyl. New York: Basic Books; 1991.

5. Marx D. Maintenance error causation. Washington, DC: Federal Aviation Administration Office of Aviation Medicine; 1999.

6. Reason J. Human error. New York: Cambridge University Press; 1990.

7. Weick KE. Organizational culture as a source of high reliability. Calif Management Rev. 1987;29:112–127.

8. Weick KE, Sutcliffe KM, Obstfeld D. Organizing for high reliability: processes of collective mindfulness. Res Organizational Behav. 1999;21:23–81.


Articles from The BMJ are provided here courtesy of BMJ Publishing Group



Source: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1117770/
