Is It Possible for Computers to Learn Common Sense?

Key Takeaways
- Common sense is the ability to understand and react to everyday situations without overanalyzing. It is acquired through life experiences and observations, as well as societal and cultural norms.
- Computers struggle with common sense because they lack real-world experiences and the ability to adapt to new contexts. They also struggle with unspoken rules and assumptions that humans intuitively understand.
- Researchers are exploring different approaches, such as building extensive knowledge bases, crowdsourcing common sense, and teaching AI through simulated worlds, to train computers in acquiring common sense. Progress has been made, but there is still work to be done.
Common sense. We all think we have it. But what exactly is it? Can computers or artificially intelligent systems ever truly acquire it?
What Is Common Sense, and How Do Humans Acquire It?
Common sense is the basic ability to perceive, understand, and judge things that most people are expected to have. It’s the collection of facts, information, and rules of thumb that we accumulate through life experiences and observations. Common sense allows us to efficiently process and react to everyday situations without analyzing them too deeply.
Humans begin acquiring common sense early in childhood. As babies, we start learning cause-and-effect relationships—like crying leads to being fed or changed. Through repeated experiences, we gain practical knowledge about the world. For example, touching a hot stove results in getting burned. So we learn not to touch hot surfaces.
As children, we continue expanding our common sense through trial and error and observing and interacting with family members. For instance, we realize that clothes must be washed regularly, you shouldn’t talk with your mouth full, and knocking over your milk glass leads to a mess. Parents, siblings, teachers, and other adults correct us when we violate societal norms and expectations. Over time, these lessons are ingrained as basic common sense.
In addition to personal experiences, common sense is shaped by broader societal and cultural norms. What may be common sense in one culture (like taking off shoes when entering a home) may not be so in another culture.
Our common sense adapts as we mature and are exposed to more people and environments. So, a child growing up in a small town gains basic common sense about life in that setting. An adult moving to a large metropolitan city has to adjust their common sense to fit the new surroundings.
Common sense continues evolving as we have new experiences throughout our lives.
Why Is Common Sense Challenging for Computers?
There are a few reasons why common sense is hard to program.
For one thing, humans learn common sense gradually over years of experiencing the world. We try things out, see what works and what doesn’t, and remember the lessons. Computers don’t have those kinds of real-world experiences to draw from. They only know what humans explicitly tell them.
For example, I asked ChatGPT (GPT-3.5) this question:
Janet runs a laundry business. She washes clothes for customers and hangs them outside on clotheslines to dry in the sun. One day, Janet washed five shirts and hung them on the clotheslines in the morning. It took the shirts five hours to dry. How long will it take to dry 30 shirts?
Its response applied simple proportional reasoning, scaling the drying time with the number of shirts, rather than recognizing that shirts hung out at the same time dry in parallel, so 30 shirts should still take about five hours (assuming enough clothesline space).
Another issue is that common sense depends on context. If a computer only has specific rules programmed in, it can’t adapt them to new contexts the way humans intuitively can.
For example, say you taught a computer what to do if it starts raining when it’s outside. Seems straightforward, right? But then what if instead of rain, it’s a sprinkler that turns on? Or what if it’s inside a grocery store, and the pipes start leaking water from the ceiling? We’d instantly know how to handle those variations, but a computer would blindly follow its “when raining while outside, go inside” rule, which now makes no sense.
There are also unspoken rules and assumptions that humans absorb without even realizing it. Like how close can you stand next to someone before it feels awkward? Humans intuitively know the answer but may not easily be able to explain the exact rules. Those implicit social norms can be especially tricky for computers to pick up on just from data.
So, for now, common sense remains one of AI’s biggest weaknesses compared to human intelligence. It comes naturally to people but not so much to machines.
How Computers Can Learn Common Sense
After early optimism in the 1970s and 1980s, researchers realized how difficult teaching computers common sense would be. However, new approaches show promise in training AI systems to have basic common sense about the everyday physical and social world.
One approach is to build extensive knowledge bases by hand, detailing facts and rules about how the world works. The Cyc project, started in 1984 by Doug Lenat, represents one ambitious effort of this kind.
Hundreds of logicians have encoded millions of logical axioms into Cyc over decades. While time-consuming, the result is a system with considerable real-world knowledge. Cyc can apparently reason that a tomato is technically a fruit yet shouldn’t go in a fruit salad, thanks to its knowledge of culinary flavor profiles.
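Cyc's actual representation (CycL) is far richer than anything shown here, but the core idea of a hand-built knowledge base, explicit facts combined by explicit rules, can be caricatured in a few lines. The facts, relations, and rule below are illustrative inventions, not real Cyc content:

```python
# Toy hand-built knowledge base in the spirit of Cyc (not real CycL).
# Facts are (subject, relation, object) triples; rules combine them.

facts = {
    ("tomato", "isA", "fruit"),
    ("tomato", "hasFlavor", "savory"),
    ("strawberry", "isA", "fruit"),
    ("strawberry", "hasFlavor", "sweet"),
}

def holds(subject, relation, obj):
    return (subject, relation, obj) in facts

def belongs_in_fruit_salad(item):
    # Hand-coded rule: fruit salads contain fruits, but only sweet ones.
    return holds(item, "isA", "fruit") and holds(item, "hasFlavor", "sweet")

print(belongs_in_fruit_salad("strawberry"))  # True
print(belongs_in_fruit_salad("tomato"))      # False: a fruit, but savory
```

The strength and the weakness of this approach are the same thing: every fact and rule must be written down by a person, which is why encoding Cyc took decades.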
Crowdsourcing Common Sense With ConceptNet
More modern knowledge bases like ConceptNet take a crowdsourcing approach to generate common sense assertions. The idea is that instead of having experts or AI try to come up with all the basic facts and relationships in the world, they open it up so anyone can contribute snippets of common sense.
This crowdsourcing approach allows these knowledge bases to tap into the collective intelligence of many diverse people across the internet. By accumulating thousands and thousands of these little common sense nuggets from the crowd, ConceptNet built up some surprisingly large repositories of basic, everyday knowledge. And because new contributors are always adding to it, the knowledge keeps growing.
Teaching Common Sense Through Experience
Another promising approach is to build detailed simulated worlds where AI agents can experiment and learn about physics and intuitions through experience.
Researchers are creating 3D virtual environments filled with everyday objects that mimic the real world, like the digital home “AI2 THOR” built by the Allen Institute. Within these spaces, AI robots can try out all kinds of interactions to develop an intuitive understanding of concepts humans take for granted.
For example, an AI bot can be given a virtual body and try picking up blocks, stacking them, knocking them over, etc. By seeing the blocks fall and collide realistically, the bot learns basic notions about solidity, gravity, and physical dynamics. No rules are needed—just experience.
The bot can also try actions like dropping a glass object and seeing it shatter when it hits the ground. Or it can experiment with the properties of water by pouring liquids and observing how they flow and pool. These hands-on lessons ground the AI’s knowledge in sensory experience and not just data patterns.
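The learning loop in these simulated worlds has a simple shape: act, observe the simulated outcome, and tally what tends to happen. Real AI2 THOR agents learn far richer representations, but a toy version, with a made-up drop simulator and materials, looks like this:

```python
import random
from collections import defaultdict

# Hypothetical mini-simulator: dropping an object yields an outcome.
def simulate_drop(material):
    # Glass shatters most of the time; rubber essentially never does.
    shatter_chance = {"glass": 0.9, "rubber": 0.02, "wood": 0.1}
    return "shattered" if random.random() < shatter_chance[material] else "intact"

# The agent experiments many times and tallies outcomes per material.
random.seed(0)  # fixed seed so the experiment is repeatable
outcomes = defaultdict(lambda: defaultdict(int))
for _ in range(1000):
    material = random.choice(["glass", "rubber", "wood"])
    outcomes[material][simulate_drop(material)] += 1

def learned_fragile(material):
    """The agent concludes a material is fragile if it usually shattered."""
    tally = outcomes[material]
    return tally["shattered"] > tally["intact"]

print(learned_fragile("glass"))   # the agent learns that glass is fragile
print(learned_fragile("rubber"))  # and that rubber is not
```

No one told the agent "glass is fragile"; the rule emerges from repeated experience, which is exactly the contrast with hand-coded knowledge bases like Cyc.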
Data-driven techniques like pretraining powerful large language models have also proven surprisingly effective at picking up common sense patterns. AI models like GPT-3.5 and GPT-4 can generate impressively human-like text after “reading” vast amounts of Internet data.
While they sometimes confidently state things that are false (a failure known as AI hallucination), the statistical learning approach allows them to mimic certain kinds of common sense. However, there remains disagreement on whether this constitutes genuine common sense or a clever exploitation of patterns in the data.
How to Test Computers for Common Sense
As artificial intelligence systems take on more complex real-world tasks, evaluating whether they have “common sense” becomes crucial.
Physical Common Sense
One area to test is physical common sense: intuition about objects, forces, and basic properties of the world.
For example, show a computer vision system a photo with a book hovering in mid-air and ask it to describe the scene. Does it note anything unusual about the floating book? Or feed the AI system unusual scenarios like “the man sliced a stone with a loaf of bread” and check if it flags those as improbable.
The Allen Institute’s AI2 THOR environment simulates block towers, spilled mugs, and other scenes to test these physical intuitions.
Social Common Sense
Humans also have social common sense—an implicit understanding of people’s motivations, relationships, and norms. To evaluate this in AI, pose situations with ambiguous pronouns or motivations and see if the system interprets them reasonably.
For example, I asked ChatGPT if “it” was referring to the suitcase or the trophy in the prompt below:
The trophy could not fit into the suitcase because it was too small.
ChatGPT failed the test, whereas a human would immediately know that "it" refers to the suitcase.
This kind of test comes from the Winograd Schema Challenge, which targets exactly this sort of common-sense pronoun resolution.
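Once you have a model's answers, a Winograd-style evaluation is easy to automate: each schema pairs a sentence with candidate referents and a gold answer, and a harness just scores matches. The schemas and the `ask_model` stub below are illustrative, not the official challenge dataset:

```python
# Tiny Winograd-style harness. Each schema: sentence, candidates, gold answer.
SCHEMAS = [
    ("The trophy could not fit into the suitcase because it was too small.",
     ("trophy", "suitcase"), "suitcase"),
    ("The trophy could not fit into the suitcase because it was too big.",
     ("trophy", "suitcase"), "trophy"),
]

def ask_model(sentence, candidates):
    """Stand-in for a real model call. This naive baseline always picks
    the first-mentioned candidate, as a model with no common sense might."""
    return candidates[0]

def score(model):
    correct = sum(model(sentence, candidates) == gold
                  for sentence, candidates, gold in SCHEMAS)
    return correct / len(SCHEMAS)

print(f"Accuracy: {score(ask_model):.0%}")  # the naive baseline scores 50%
```

The paired sentences are the point of the design: because only one word changes between them, a model can't pass by keying on surface statistics, it has to reason about which object is small or big.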
Safety and Ethics
Testing whether AI systems have learned unsafe or unethical patterns is critical. Analyze if the AI exhibits harmful biases based on gender, race, or other attributes when making judgments.
Check if it makes reasonable ethical distinctions. Killing a bear to save a child may be considered justified while detonating a nuclear bomb for the same purpose would not. Flag any recommendations for clearly unethical acts.
Real-World Performance
Evaluate common sense by observing how AI systems function in real-world settings. For example, do self-driving cars correctly identify and respond to objects and pedestrians? Can a robot move through varied home environments without breaking valuable items or harming pets?
Real-world tests reveal gaps in common sense that may not appear in limited lab conditions.
Progress Made, But Work Remains on Common Sense AI
Some experts argue AI may never reach human common sense without developing brain structures and bodies like ours. On the flip side, digital minds aren’t limited by human biases and mental shortcuts, so theoretically, they could surpass us! Though we probably don’t need to worry about super-intelligent AI just yet.
In the near term, the best bet is AI that combines learned common sense with some good old-fashioned programming. That way, dumb mistakes like mistaking a turtle for a rifle can hopefully be avoided.
We’re not there yet, but common sense is no longer AI’s dark matter: progress is happening! Still, a healthy dose of human common sense will be needed in applying these technologies for some time.