Natural Language Processing, Computer Vision Systems, and Robotics

Other important AI techniques include natural language processing, computer vision systems, and robotics.

1. Natural Language Processing

Human language is not always precise. It is often ambiguous, and the meanings of words can depend on complex variables such as slang, regional dialects, and social context. Natural language processing (NLP) makes it possible for a computer to understand and analyze natural language, the language human beings instinctively use, rather than language specially formatted to be understood by computers. NLP algorithms are typically based on machine learning, including deep learning, which can learn how to identify a speaker’s intent from many examples. Akershus University Hospital, described in the chapter-opening case, used NLP and IBM Watson Explorer to sift through thousands of medical records containing unstructured textual data expressed in everyday language, much like natural speech. The algorithms could read the text in a medical record and interpret its meaning. You can also see natural language processing at work in leading search engines such as Google, in spam filtering systems, and in text mining sentiment analysis (discussed in Chapter 6).
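To make the idea of learning intent from examples concrete, here is a minimal sketch in Python using the scikit-learn library. It is not the system used by any organization described in this section; the customer utterances, intent labels, and training data below are hypothetical and purely for illustration.

```python
# A minimal sketch of an NLP intent classifier that learns from labeled
# examples. The tiny training set and intent labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical customer utterances paired with the intent each expresses.
training_texts = [
    "I forgot my online banking password",
    "How do I reset my password?",
    "What is the balance on my checking account?",
    "Show me my account balance",
    "I want to report a lost credit card",
    "My card was stolen yesterday",
]
training_intents = [
    "reset_password", "reset_password",
    "check_balance", "check_balance",
    "lost_card", "lost_card",
]

# TF-IDF bag-of-words features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(training_texts, training_intents)

# Infer the intent of a new, unseen utterance.
print(model.predict(["I can't remember my login password"])[0])  # reset_password
```

A production system would be trained on many thousands of real interactions and use far richer models, but the basic pattern is the same: labeled examples in, a learned mapping from text to intent out.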

Tokyo-based Mizuho Bank employs advanced speech recognition technology, IBM® Watson™ content analytics software, and a cloud services infrastructure to improve contact center agents’ interactions with customers. After converting the customer’s speech to textual data, the solution applies natural language processing algorithms based on machine learning analysis of interactions with thousands of customers. The system learns from each customer interaction so that it can eventually infer the customer’s specific needs or goals at each point of the conversation. It then formulates the optimal response, which is delivered in real time as a prompt on the agent’s screen. By helping contact center agents sense and respond to customer needs more efficiently, this solution reduced the average duration of customer interactions by more than 6 percent (IBM, 2018).

2. Computer Vision Systems

Computer vision systems deal with how computers can emulate the human visual system to view and extract information from real-world images. Such systems incorporate image processing, pattern recognition, and image understanding.

An example is Facebook’s facial recognition tool DeepFace, which is nearly as accurate as the human brain at recognizing a face. DeepFace will help Facebook improve the accuracy of its existing facial recognition capabilities to ensure that every photo of a Facebook user is connected to that person’s account. Computer vision systems are also used in autonomous vehicles such as drones and self-driving cars (see the chapter-ending case), in industrial machine vision systems (e.g., inspecting bottles), in military applications, and in robotic tools.
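As a simplified illustration of the pattern recognition step in such systems, the sketch below uses OpenCV’s pretrained Haar cascade detector to locate faces in a photo. This shows only face detection, a building block far simpler than DeepFace’s deep learning approach to recognizing whose face it is; the image file names are placeholders.

```python
# A minimal sketch of face detection with OpenCV's bundled Haar cascade.
# "photo.jpg" and the output file name are placeholder paths.
import cv2

# Load the pretrained frontal-face detector shipped with OpenCV.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("photo.jpg")                    # placeholder image path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)     # the detector works on grayscale

# Each detection is a bounding box (x, y, width, height) around a face.
faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("photo_with_faces.jpg", image)
print(f"Detected {len(faces)} face(s)")
```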

In 2017, the National Basketball Association (NBA) decided to allow sponsors to place small logo patches representing their brands on player uniforms. This advertising investment turned out to be worth its multi-million-dollar cost. According to GumGum, an AI company focusing on computer vision technology, the image placed by The Goodyear Tire & Rubber Co. on the uniforms of the Cleveland Cavaliers generated $3.4 million in value from social media exposure alone during the first half of the basketball season. GumGum develops algorithms that enable computers to identify what is happening in imagery. GumGum used computer vision technology to thoroughly analyze broadcast and social media content for the placement, exposure, and duration of Goodyear images that appeared in online or TV-generated NBA content. Instead of humans trying to monitor the number of times a logo appeared on a screen, GumGum’s vision technology tracks and reports the data (Albertson, 2018).
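The sketch below illustrates the general idea of measuring logo exposure in video by scanning the frames of a clip with simple template matching. GumGum’s actual algorithms are far more sophisticated, relying on learned detectors rather than template matching; the file names and the detection threshold here are assumptions made only for illustration.

```python
# A minimal sketch of logo-exposure tracking over a video clip.
# File names are placeholders; this assumes the logo template image is
# smaller than a video frame.
import cv2

video = cv2.VideoCapture("game_broadcast.mp4")                 # placeholder clip
logo = cv2.imread("sponsor_logo.png", cv2.IMREAD_GRAYSCALE)    # placeholder logo
fps = video.get(cv2.CAP_PROP_FPS) or 30.0                      # fall back if unknown

frames_with_logo = 0
total_frames = 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    total_frames += 1
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Slide the logo template over the frame; a high correlation score at
    # any location is treated as an appearance of the logo.
    scores = cv2.matchTemplate(gray, logo, cv2.TM_CCOEFF_NORMED)
    if scores.max() > 0.8:                                     # assumed threshold
        frames_with_logo += 1

video.release()
print(f"Logo visible for roughly {frames_with_logo / fps:.1f} seconds "
      f"of {total_frames / fps:.1f} seconds of footage")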

3. Robotics

Robotics deals with the design, construction, operation, and use of movable machines that can substitute for humans, along with the computer systems for their control, sensory feedback, and information processing. Robots cannot substitute entirely for people but are programmed to perform a specific series of actions automatically. They are often used in dangerous environments (such as bomb detection and deactivation), manufacturing processes, military operations (drones), and medical procedures (surgical robots). Many employees now worry whether robots will replace people entirely and take away their jobs (see the Chapter 4 Interactive Session on Organizations).

The most widespread use of robotic technology has been in manufacturing. For example, automobile assembly lines employ robots for heavy lifting, welding, applying glue, and painting. People still do most of the final assembly of cars, especially when installing small parts or wiring that needs to be guided into place. A Renault SA plant in Cleon, France, now uses robots from Universal Robots AS of Denmark to drive screws into engines, especially screws that go into places people find hard to access. The robots verify that parts are properly fastened and check to make sure the correct part is being used. The Renault robots are also capable of working in proximity to people and slowing down or stopping to avoid hurting them.

Source: Laudon, Kenneth C., and Jane Price Laudon (2020), Management Information Systems: Managing the Digital Firm, 16th ed., Pearson.
