The Progress of Artificial Intelligence Towards Common Sense
Understanding Common Sense in the Context of AI
Okay, so common sense – we all think we've got it, right? But what is it, really? It's wild how much we rely on it every day without even realizing. Imagine you're ordering pizza with friends, and you decide to order a different kind because Susie doesn't eat meat anymore. That little decision, that's common sense in action. It's basically that natural human ability to just... get things. Navigate daily life without an instruction manual, you know?
Think of it as a mix of social smarts, a kinda intuitive understanding of how the world works physically, and a whole bunch of background knowledge we pick up along the way. Like knowing not to put a heavy rock on a flimsy table. freethink.com gives a good overview of this. And get this, G.K. Chesterton called it a "wild thing, savage, and beyond rules" (Gilbert Keith Chesterton, 1906, Chapter 6). I kinda like that; it's not some textbook definition we're talking about.
See, AI usually focuses on super specific tasks. It's trained on massive datasets to recognize patterns or perform defined operations, like identifying a cat in a photo or translating a sentence. That's the problem! Common sense is, like, the opposite of specific; it's vague and can't be defined by a set of rules. So when an AI needs to deal with context, meaning, or value judgments, it kinda falls apart.
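To make that "super specific tasks" point concrete, here's a minimal sketch (not from the article) of the cat-in-a-photo example: a pretrained image classifier that can label one picture and do literally nothing else. It assumes torchvision and Pillow are installed, and "cat.jpg" is a made-up file name.

```python
# A narrow, single-task model: a pretrained ResNet that labels a photo
# and understands nothing about cats, tables, or Susie's diet.
import torch
from PIL import Image
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()
preprocess = weights.transforms()  # the exact resize/normalize the model expects

img = Image.open("cat.jpg")               # hypothetical input image
batch = preprocess(img).unsqueeze(0)      # shape: (1, 3, 224, 224)
with torch.no_grad():
    probs = model(batch).softmax(dim=1)

label = weights.meta["categories"][probs.argmax().item()]
print(label)  # e.g. "tabby" -- pattern matching, with zero understanding
```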
And it's not just old AI; even the fancy new models mess up sometimes. Remember GPT-3? freethink.com has a good rundown. Yeah, that thing can write all sorts of stuff, but it also makes crazy errors that no human would. For example, it might confidently state that "the sun rises in the west" or suggest that you can "boil water by freezing it." These are basic, everyday facts that a child would know, but GPT-3, lacking true common sense, can get them wrong.
So, why is this so hard for AI? That's what we'll dive into next.
Recent Advancements and Approaches to Imparting Common Sense
Okay, so, teaching AI common sense? Still a work in progress, right? But things are actually moving faster than you might think.
Transformer models are showing some serious promise. I mean, these things can model language in a way that's actually pretty impressive. And with a few tweaks, they can even answer simple common-sense questions! Which, honestly, is a huge step towards chatbots that don't sound like complete robots. It seems like the whole industry is heading towards human-like chatbots.
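Here's roughly what "asking a transformer a common-sense question" looks like in practice. This is a sketch, not the article's method: it assumes the Hugging Face transformers package, uses GPT-2 as a stand-in model, and the prompt is invented for illustration.

```python
# Probe a small pretrained language model with a common-sense completion.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "If you drop a glass bottle on a concrete floor, it will probably"
out = generator(prompt, max_new_tokens=8, num_return_sequences=1)
print(out[0]["generated_text"])
# A model with decent common-sense grounding should continue with something
# like "break" or "shatter"; small models frequently don't, which is exactly
# the gap the tweaks mentioned above try to close.
```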
And it's not just talk; there's a ton of research happening with transformers, a lot of it directly aimed at getting AI to reason more like we do. Think about how much easier things would be if your virtual assistant actually understood what you were asking, instead of just spitting out canned responses.
Like, imagine asking your AI to book a flight, and it automatically knows to check for layovers that are long enough for you to grab a coffee. Or if it could figure out that you probably don't want a hotel room next to a construction site. That's the kind of common sense we're aiming for.
DARPA, you know, the U.S. Defense Advanced Research Projects Agency, isn't sitting still either. They kicked off a "Machine Common Sense" program in 2019 to speed up research in this area; a 2018 paper from the agency, released before the program formally launched, outlines the problem and the state of research in the field.
One of the projects under that program is called MOWGLI, which is a collaboration between a bunch of universities, including Carnegie Mellon University and the University of Washington. Their goal? To build a computer system that can actually answer common sense questions. Like, real, everyday stuff that any human would just get.
Projects like MOWGLI contribute to this larger push by researchers to really nail down what common sense is, and how we can even tell if an AI has it. It's not just about passing some test; it's about creating AI that can truly understand and navigate the world like we do.
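For a flavor of how researchers do "tell if an AI has it," here's a sketch using the public CommonsenseQA benchmark via the datasets package. To be clear, this is not MOWGLI's actual pipeline (the article doesn't detail it); it just shows what the everyday questions in these evaluations look like.

```python
# Load one validation example from CommonsenseQA and print the question,
# the multiple-choice options, and the gold answer.
from datasets import load_dataset

ds = load_dataset("commonsense_qa", split="validation")
ex = ds[0]

print(ex["question"])
for label, text in zip(ex["choices"]["label"], ex["choices"]["text"]):
    print(f"  {label}) {text}")
print("gold answer:", ex["answerKey"])
# Evaluation is then just: ask the model each question, compare its pick
# against answerKey, and report accuracy.
```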
Next up, we'll dig into why this is still so hard: the challenges and limitations.
Challenges and Limitations
Okay, so you're trying to get AI to make sense of the world like we do - easier said than done, right? It's like, can we really expect these machines to just "get" things the way humans do?
One issue is that common sense is hard to pin down. Like, researchers try to break it down into stuff like sociology, psychology, and general knowledge. The freethink.com article touches on this, suggesting that common sense is a complex blend of these domains.
But, it's messy! Turns out even experts can't agree on what fits where. Is knowing not to interrupt someone sociology or just plain politeness?
And, let's be real, there are a LOT of theories about what common sense even is. It's not like there's a textbook definition everyone agrees on.
Another thing? Humans know a ton of stuff without even realizing it. That's "tacit knowledge": knowing how to do things, versus just knowing that something is true. Riding a bike or keeping a comfortable distance when talking to someone are prime examples, things we do instinctively but struggle to articulate. It's hard to put into words, and even harder to teach to an AI. Like, how do you explain to a robot how to ride a bike, or that you should stand about three feet away from someone during a conversation?
And even with those fancy transformer models, are we reaching a limit? They're getting huge, needing more power, but are they really getting that much smarter when it comes to common sense? Some research suggests that while scaling up transformers improves performance on many tasks, there are diminishing returns when it comes to common sense reasoning. They might still struggle with novel situations or nuanced social cues, even when they're massive. It almost seems like scaling alone isn't enough; we might need a whole new approach, something that goes beyond just making bigger neural networks.
So, what's the answer? Is it even possible to bridge this gap? These challenges in imparting common sense also raise profound ethical questions about the nature of AI and its potential impact on society. Next, we'll discuss the ethical implications of AI.
Ethical and Philosophical Considerations
So, AI with common sense... sounds cool, right? But what if they start having feels, too? It gets a little complicated, ethically speaking.
Meaning and Value: It's not just about knowing stuff, but understanding it, you know? Like, an AI might know fire burns, but can it really feel the pain? If AI is to understand what's damaging to human beings and what's best for them, it will need to weigh possible harms and benefits, pleasures and pains, against each other. How will it manage without a feel for what these mean to us? How can it comprehend the importance of love, friendship, autonomy, justice, privacy, or solidarity if it has never experienced them or their opposites? Researchers are exploring various philosophical and computational approaches, from grounding AI in simulated environments to developing reward functions that mimic human values, but these are still highly speculative.
Embodiment and Sentience: What if AI needs to be, like, alive to truly get it? Embodied, with senses and stuff? A common view in the articles cited here is that embodiment (having a physical presence and interacting with the world) and sentience (the capacity to feel) are prerequisites for an AI to truly experience subjective states like pleasure and pain. Without them, its understanding might remain purely theoretical.
Bias and Autonomy: And if they do get all human-like, won't they also pick up our biases? Maybe even want their own way? AI can inherit biases from the data it's trained on, leading to unfair or discriminatory outcomes. For instance, an AI used for hiring might inadvertently favor certain demographics if its training data reflects historical biases (a toy audit of that is sketched below). And "wanting their own way" could manifest as an AI pursuing goals that conflict with human intentions, especially if it develops a degree of autonomy in decision-making.
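Here's what a first-pass bias check on that hiring example might look like. Every number here is invented for illustration; real audits use real decision logs and more careful statistics.

```python
# Toy bias audit: compare selection rates across demographic groups in a
# model's hiring decisions. A large gap between groups is the classic
# "disparate impact" signal worth investigating further.
decisions = [
    {"group": "A", "hired": True},  {"group": "A", "hired": True},
    {"group": "A", "hired": False}, {"group": "B", "hired": True},
    {"group": "B", "hired": False}, {"group": "B", "hired": False},
]

rates = {}
for g in sorted({d["group"] for d in decisions}):
    group = [d for d in decisions if d["group"] == g]
    rates[g] = sum(d["hired"] for d in group) / len(group)

print(rates)  # roughly {'A': 0.67, 'B': 0.33} -> B selected at half A's rate
```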
Navigating these ethical considerations is a complex balancing act, requiring careful thought about the kind of AI we want to build and its potential impact on society.