I've just been reading some complaints about self-parking Nissan Qashqais. Apparently they can't safely park themselves in, for instance, an underground car park with concrete pillars in the way, and the drivers in question felt that this was unfair, unreasonable, and unexpected. Why can't the car defend itself from concrete? Doesn't it know that concrete is harder than it is and will damage it?
I responded thus...
You do realise that you've just identified the principal general problem that has had all the planet's best minds in the AI field working as hard as they can for about 50 years now without any significant progress, right?
And that, since Asimov wrote "Robbie" in 1940 -- the first story in I, Robot -- he invalidated and obsoleted all previous robot fiction and redefined the field. His "Three Laws" ended the "killer robot" story and made robot stories about what having robots would do to humanity.
Asimov, writing before digital electronic computers had been invented, assumed that the stuff about seeing, walking, listening and understanding and so on would be easy. After all, a 2YO human can do them, self-taught. A mouse, even a cockroach, can run around and avoid hazards.
Speech would be hard. Stuff like chess would be really hard.
He was, of course, totally wrong. Chess is almost trivial. Talking isn't particularly difficult. Crappy early-1980s 8-bit computers could do that.
Walking is difficult. Seeing, and knowing what you're seeing, is very difficult. Telling the difference between a bollard and a child is extremely hard, and as we all know, even the best and most complex software regularly fails and screws up.
We've solved speech recognition, badly, for some languages, in limited domains, by brute-forcing it, and it still doesn't work very well. It's getting there, though.
What you're asking for is a general-purpose artificial intelligence, capable of sophisticated discrimination and value judgement, and that is the hardest thing there is. It's been a couple of decades away for longer than we've been putting things into space, and it still is.
So, no, self-driving cars can't do that. And they won't, not until some time after they're on the market, if then. And a *lot* of people and other animals on the roads will die before they do.
Just as the credit card companies accept that their security is a bit shite, and that they will necessarily lose tens of billions per year to fraud (around US$20 bn per annum!), but tolerate it as a cost of doing business because the business is so profitable -- so it is with cars. Motor cars are the most dangerous form of transport ever invented, but we tolerate them because they're so damned convenient.
It kills around three and a half thousand people a day: roughly 1.3 million a year.
That's good going even for a modern international war, but we ignore it. It's normal.
So, soon, very stupid robots will be killing thousands a day, but if it makes cars easier and more convenient, we'll put up with it, and pay good money for the privilege.
We should have wrapped Asimov in copper wire before we buried him. We'd be getting a few kilowatts off the old sod by now.