System Quality: Data Quality and System Errors

White Christmas turned into a blackout for millions of Netflix customers and social network users on December 24, 2012. The blackout was caused by the failure of Amazon's cloud computing service (AWS), which provides storage and computing power for many websites and services, including Netflix. The loss of service lasted for a day. Amazon blamed the failure on Elastic Load Balancing, software that distributes incoming traffic across its cloud servers to prevent any single server from being overloaded. Amazon's cloud computing services have had several subsequent outages, although none as long-lasting as the Christmas Eve outage. In September 2016, AWS experienced a five-hour outage. Outages at cloud computing services are rare, but they recur, and they have called into question the reliability and quality of cloud services. Are these outages acceptable?

The debate over liability and accountability for unintentional consequences of system use raises a related but independent moral dimension: What is an acceptable, technologically feasible level of system quality? At what point should system managers say, "Stop testing, we've done all we can to perfect this software. Ship it!" Individuals and organizations may be held responsible for avoidable and foreseeable consequences, which they have a duty to perceive and correct. The gray area is that some system errors are foreseeable and correctable only at very great expense, an expense so great that pursuing this level of perfection is not economically feasible: no one could afford the product.

For example, although software companies try to debug their products before releasing them to the marketplace, they knowingly ship buggy products because the time and cost of fixing all minor errors would prevent these products from ever being released. What if the product were never offered on the marketplace? Would social welfare as a whole falter and perhaps even decline? Carrying this further, just what is the responsibility of a producer of computer services: should it withdraw a product that can never be perfect, warn the user, or ignore the risk (let the buyer beware)?

Three principal sources of poor system performance are (1) software bugs and errors, (2) hardware or facility failures caused by natural or other causes, and (3) poor input data quality. The Learning Track discusses why zero defects in software code of any complexity cannot be achieved and why the seriousness of remaining bugs cannot be estimated. Hence, there is a technological barrier to perfect software, and users must be aware of the potential for catastrophic failure. The software industry has not yet arrived at testing standards for producing software of acceptable but imperfect performance.

Although software bugs and facility catastrophes are likely to be widely reported in the press, by far the most common source of business system failure is poor data quality. Few companies routinely measure the quality of their data, but individual organizations report data error rates ranging from 0.5 to 30 percent.
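To make the idea of a data error rate concrete, here is a minimal sketch of how an organization might measure one: run a few validity checks over a batch of records and report the share of records that fail at least one check. The record fields and validation rules below are illustrative assumptions, not rules taken from the text.

```python
import re

# Illustrative customer records; in practice these would come from a database.
records = [
    {"id": 1, "email": "alice@example.com", "zip": "10001", "age": 34},
    {"id": 2, "email": "bob@example",       "zip": "10001", "age": 29},  # malformed email
    {"id": 3, "email": "carol@example.com", "zip": "ABCDE", "age": 41},  # invalid zip code
    {"id": 4, "email": "dave@example.com",  "zip": "94105", "age": -5},  # impossible age
]

# A deliberately simple well-formedness check for email addresses.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid(record):
    """Apply simple validity checks; all rules here are assumed examples."""
    return (
        bool(EMAIL_RE.match(record["email"]))                    # well-formed email
        and record["zip"].isdigit() and len(record["zip"]) == 5  # 5-digit US zip
        and 0 <= record["age"] <= 130                            # plausible age range
    )

errors = sum(1 for r in records if not is_valid(r))
error_rate = 100.0 * errors / len(records)
print(f"{errors} of {len(records)} records failed validation "
      f"({error_rate:.1f}% error rate)")
```

In practice, rules like these would be tailored to the business domain and the resulting rate tracked over time; the point is simply that measuring data quality requires little more than making the validity rules explicit and counting failures.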

Source: Laudon, Kenneth C., and Jane Price Laudon (2020). Management Information Systems: Managing the Digital Firm, 16th ed. Pearson.
