
Artificial Intelligence, Explained


By Jennifer Monahan

Many of us are familiar with the way artificial intelligence (AI) is already integrated into our daily lives: Spotify recommends new songs that we love, Google Maps provides faster routes for our morning commute, and Alexa sounds an alarm to remind us when it's time to leave for an appointment. Each of these is an instance of AI in action, and we've become accustomed to their existence.

So why the current hype cycle around AI? What’s different now?

The most recent iterations of AI – called “generative” AI – can do things that look, sound, and feel eerily human.

Why It Matters

AI has the potential to transform various industries, from finance and education to transportation and healthcare. AI can automate repetitive tasks, improve decision-making processes, and enhance the accuracy and speed of data analysis.

While the potential benefits are enormous, AI presents significant ethical and societal concerns. Like any tool, AI can be used for good or harm. Carnegie Mellon University’s Block Center for Technology and Society was created to explore how technology can be leveraged for social good.

As of now, only a few technology super-companies have the capacity to create large-scale generative AI tools. The systems require massive amounts of both computing power and data. By default, a few people who lead these organizations are making decisions about the use of AI that will have widespread consequences for society. It behooves the rest of us to recognize the moment we’re in, and to engage in shaping the path forward.

Some Basic History and Definitions…What AI Is

Alan Turing, one of the founders of AI, suggested in 1950 that if a machine can have a conversation with a human and the human can’t distinguish whether they are conversing with another human or with a machine, the machine has demonstrated human intelligence.

Machine learning (ML) first entered the public consciousness in the 1950s, when television viewers watched demonstrations of Arthur Samuel's checkers program; the program went on to defeat its human opponent, Robert Nealey, in 1962. For a long time, though, AI remained largely confined to the realm of tech geniuses and science fiction enthusiasts.

Those tech geniuses achieved a number of groundbreaking milestones over the last seven decades, including:

A Timeline

Pictured above, left, Herbert Simon joined the CMU faculty in 1949 and helped create several of the University's departments and schools. Allen Newell, pictured right (circa 1970), earned a doctorate in Industrial Administration (1957) at the Carnegie Institute of Technology and later co-founded CMU's Computer Science Department.

  • In 1957, Frank Rosenblatt developed the Perceptron, an early artificial neural network that recognized patterns.

  • In 1965, Joseph Weizenbaum developed ELIZA, the first chatbot; the system used limited natural language processing.

  • 1960s and 70s: AI entered mainstream pop culture:
    • "2001: A Space Odyssey" premiered in movie theaters (1968).
    • C-3PO and R2-D2 were introduced to the world via "Star Wars: A New Hope" (1977).
    • The Speak & Spell toy hit the shelves (1978).

  • 1974–1980: The first "AI winter" was a period of decreased funding and, consequently, slowed research in AI.

  • In 1981, the government of Japan allocated $850 million for the Fifth Generation Computer project; the goal was to create systems that could engage in conversation and reason like a human.

  • In 1984, Carnegie Mellon's NavLab developed the first autonomous land vehicle.

  • The second AI winter occurred between 1987 and 1993.

  • In 1997, IBM's Deep Blue beat world chess champion Garry Kasparov.

  • In 2011, IBM's Watson defeated champions Ken Jennings and Brad Rutter on Jeopardy!, and Apple added Siri to its iPhones.

Common Terms

The terminology around AI can be intimidating. Here’s a glossary of key terms you’ll often hear when people talk about AI.

Algorithm: a set of rules or instructions that tell a machine what to do with the data input into the system.

Deep Learning: a method of machine learning that lets computers learn in a way that mimics a human brain, by analyzing lots of information and classifying that information into categories. Deep learning relies on a neural network.

Hallucination: a situation where an AI system produces fabricated, nonsensical, or inaccurate information. The wrong information is presented with confidence, which can make it difficult for the human user to know whether the answer is reliable.

Large Language Model (LLM): a computer program that has been trained on massive amounts of text data such as books, articles, website content, etc. An LLM is designed to understand and generate human-like text based on the patterns and information it has learned from its training. LLMs use natural language processing (NLP) techniques to learn to recognize patterns and identify relationships between words. Understanding those relationships helps LLMs generate responses that sound human—it’s the type of model that powers AI chatbots such as ChatGPT.

Machine Learning (ML): a type of artificial intelligence that uses algorithms which allow machines to learn and adapt from evidence (often historical data), without being explicitly programmed to learn that particular thing.

Natural Language Processing (NLP): the ability of machines to use algorithms to analyze large quantities of text, allowing the machines to simulate human conversation and to understand and work with human language.

Neural Network: a deep learning technique that loosely mimics the structure of a human brain. Just as the brain has interconnected neurons, a neural network has tiny interconnected nodes that work together to process information. Neural networks improve with feedback and training.

Token: the building block of text that a chatbot uses to process and generate a response. For example, the sentence "How are you today?" might be separated into the following tokens: ["How", "are", "you", "today", "?"]. Tokenization helps the chatbot understand the structure and meaning of the input.
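To make tokenization concrete, here is a minimal sketch in Python. Real chatbots use subword tokenizers (such as byte-pair encoding) rather than this simple word-and-punctuation split, but the basic idea is the same:

```python
import re

def tokenize(text):
    # Split the text into words and punctuation marks. Real chatbots use
    # subword tokenizers (e.g., byte-pair encoding), but the principle
    # is the same: break the input into small, reusable pieces.
    return re.findall(r"\w+|[^\w\s]", text)

print(tokenize("How are you today?"))
# -> ['How', 'are', 'you', 'today', '?']
```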

AI refers to the ability of machines and computers to perform tasks that would normally require human intelligence. These tasks include things like recognizing patterns and making predictions. Ultimately, that’s not magic; it’s math.

To understand what's going on with AI today, it's helpful to think of AI in phases of development. Early AI systems were machines that received an input – the data they were fed by humans – and then produced a recommendation. That recommendation was based on the way the system was trained and on the algorithms (the math!) that told the system what to do with the data. It's computers that can play checkers or chess. It's Netflix knowing that you loved "Karate Kid" and suggesting that you watch "Cobra Kai."
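As a toy illustration of that "data in, rules applied, recommendation out" pattern (with a made-up rule table, not any real product's logic), such a system can be sketched as little more than a lookup:

```python
# Toy "input -> rules -> output" system in the spirit of early AI.
# The rule table below is invented purely for illustration.
WATCH_NEXT = {
    "Karate Kid": "Cobra Kai",
    "2001: A Space Odyssey": "Star Wars: A New Hope",
}

def recommend(last_watched):
    # The "algorithm" here is just a lookup: the system can only return
    # what its human programmers explicitly put into the table.
    return WATCH_NEXT.get(last_watched, "No recommendation available")

print(recommend("Karate Kid"))  # -> Cobra Kai
```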

How Generative AI Works

Generative AI is a step forward in that development. Instead of just reacting to data input, the system takes in data and then uses predictive algorithms (a set of step-by-step instructions) to create original content. In the case of a large language model (LLM), that content can take the form of original poems, songs, screenplays, and the like, produced by AI chatbots such as ChatGPT and Google Bard. The "large" in LLM indicates that the language model is trained on a massive quantity of data. Although the outcome makes it seem like the computer is engaged in creative expression, the system is actually just predicting a set of likely next tokens and then selecting one.

“The model is just predicting the next word. It doesn't understand,” explains Rayid Ghani, professor of machine learning at Carnegie Mellon University’s Heinz College of Information Systems and Public Policy. “But as a user playing around with it, it seems to have amazing capabilities, while having very large blind spots.” 

Models like ChatGPT are programmed to select the next token, or word, but not necessarily the most likely next word. A chatbot might choose – for example – the fourth most likely word in one attempt. When the user submits the exact same prompt the next time, the chatbot could randomly select the second most likely word to complete the statement. That's why we humans can ask a chatbot the same question and receive slightly different responses each time.
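Here's a rough sketch of that selection step, with made-up scores standing in for a real model's output (which would cover tens of thousands of possible tokens). It shows how a weighted random draw produces different answers to the same prompt:

```python
import math
import random

# Hypothetical next-word scores for a prompt like "The weather today is ..."
# (invented numbers; a real LLM scores a huge vocabulary of tokens).
scores = {"sunny": 3.0, "cold": 2.5, "nice": 2.0, "unpredictable": 1.0}

def sample_next_word(scores, temperature=1.0):
    words = list(scores)
    # Softmax turns raw scores into probabilities; a higher temperature
    # flattens the distribution, so less-likely words get picked more often.
    exps = [math.exp(scores[w] / temperature) for w in words]
    total = sum(exps)
    weights = [e / total for e in exps]
    # A weighted random draw: the same prompt can yield a different word
    # each time, which is why chatbot answers vary between runs.
    return random.choices(words, weights=weights, k=1)[0]

for _ in range(3):
    print("The weather today is", sample_next_word(scores))
```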

Tools like Copilot and ChatGPT use that token process to write computer code. Though the results aren't always perfect, the early consensus in the tech industry is that these tools can save coders hours of tedious work.

Text-to-image models like DALL-E and Stable Diffusion work similarly. The program is trained on lots and lots of pictures and their corresponding descriptions. It learns to recognize patterns and understand the relationships between words and visual elements. So when you give it a prompt that describes an image, it uses those patterns and relationships to generate a new image that fits the description. As a result, these models can create never-before-seen art. A prompt for “Carnegie Mellon University Scotty Dog dancing, in the style of pointillism” produced this fun gem:

Scotty dog image created by Stable Diffusion, an AI image-generation tool.
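For readers who want to experiment themselves, here is a minimal sketch using Hugging Face's diffusers library. It assumes the diffusers and torch packages are installed and a GPU is available; the checkpoint named below is one public Stable Diffusion model, not necessarily the one that produced the image above:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a public Stable Diffusion checkpoint (an illustrative choice;
# other text-to-image models work the same way).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # move the model to the GPU

prompt = "Carnegie Mellon University Scotty Dog dancing, in the style of pointillism"
image = pipe(prompt).images[0]  # the pipeline returns generated PIL images
image.save("scotty_pointillism.png")
```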

Philosophers, artists, and creative types are actively debating whether these processes constitute creativity or plagiarism.

What AI Is Not

Despite the now famous creepy conversation between New York Times writer Kevin Roose and Microsoft’s Bing chatbot, we have not yet entered the phase of sentient AI – or artificial general intelligence (AGI). AGI is still a theoretical idea. Unlike generative AI, which seems to be able to do some of the things humans do, AGI systems would actually mimic or surpass human intelligence. Machines would become self-aware and have consciousness. And if you buy into the premise of movies like "Terminator" or "The Matrix," things go south for the human race rather quickly after that. To be clear, that’s not where we are today.

AI is also not infallible. Large language models like Bard and ChatGPT have an interesting flaw – sometimes they hallucinate. As in, a user enters a prompt and the system makes up an answer that’s not true in some way. The system might produce an intelligent-sounding essay explaining photosynthesis, and cite as its source a scholarly research paper that doesn’t actually exist. Sometimes the answer is just inaccurate. To complicate matters, the information is presented with confidence and authority; it looks and sounds legitimate.

“You can imagine a physician prompting an AI chatbot to list drugs that have recently been found useful for a particular disease,” explained Ghani. “The model is designed to produce a response that sounds realistic, but it’s not designed to produce factually correct information. It would produce a list of drugs. They might be real; they might be made up. While a physician may have the training and background to separate real from fake, a patient may not be able to do so if given access to such a tool.” You can see the problem.

AI is not inherently fair and just. LLMs are trained on large quantities of data, much of which is scraped from the Internet. That data includes reliable sources right alongside the hate speech and other sewage that lives in the depths of social media platforms. Technologists have put in some protections – asking ChatGPT to tell a sexist joke elicits the following response:

I'm sorry, but I'm programmed to follow ethical guidelines, and that includes not promoting or sharing any form of sexist, offensive, or discriminatory content. I'm here to help answer questions, engage in meaningful conversations, and provide useful information. If you have any non-offensive questions or topics you'd like to discuss, please feel free to ask.

Humans employing more creative prompts can often circumvent the protections in the AI chatbots. And sometimes the AI system itself is biased, as in the case of hiring tools that discriminate against women or facial recognition software that doesn’t recognize people of color. Bias inherent in an AI model has the potential to exacerbate existing injustice.

Moving Forward

AI is changing the way we live, work, and interact with machines. When all that’s at stake is our Spotify playlist or which Netflix show we watch next, understanding how AI works is probably not important for a large percentage of the population. But with the advent of generative AI into mainstream consciousness, it’s time for all of us to start paying attention and to decide what kind of society we want to live in.

Interested in how machine learning and artificial intelligence will shape the future?

Heinz College empowers data scientists via our Master of Science in Business Intelligence and Data Analytics and Public Policy and Data Analytics programs. The Block Center focuses on how emerging technologies will alter the future of work, how AI and analytics can be harnessed for social good, and how innovation in these spaces can be more inclusive and generate targeted, relevant solutions that reduce inequality and improve quality of life for all.