In what ways do self-driving cars create moral problems that computers are not prepared to address? Do you agree with Carr that “it’s impossible to automate complex human activities without also automating moral choices” (186)? Why? If automated cars can reduce car crashes, as automated flight has reduced plane crashes, how does that affect the moral implications of driverless cars?
While reading the scenario at the beginning of this chapter, I knew that I would have swerved to avoid the animal or child; it would just have been instinct. It honestly made me sick to imagine a child being run over by a self-driving car because the car found that to be the "safer" option. This is a problem that computers are not prepared to address now, nor do I think they ever will be. I do agree with Carr, and I think that the more activities we trust computers with, the more trust we put in a machine's morality, which will never come close to a human's, because machines don't have a brain, heart, and conscience that all work together. Although plane crashes have been reduced by automated flight, cars are smaller and closer to home, which makes it harder to give up moral standing to a machine.
I agree with Kate. While having a car that drives itself would be nice, I do not think that it would make good decisions most of the time. For example, imagine driving down the road and seeing a child in the road trying to catch a dog, with no way to avoid both. I would hope that in that situation a person would swerve to miss the child and hit the dog. A self-driving car, however, might calculate that hitting the child is the safer option for you. I don't think that self-driving cars are a good idea because of this. I don't believe the technology will ever mimic a person's morality closely enough.
This scenario reminds me of our class discussion about pushing the man over the cliff: he would die, but by killing him you would save several others and lose only one. If we, humans in an EQ1 class, cannot decide which is the better option, how are we to program a computer to make this decision? I believe that if self-driving cars do come to exist, programmers would have to be ready for every possibility and give the car a set series of tasks to perform in every scenario (see the sketch below). That would be a difficult undertaking, but in what other way would a computer make a decision? We have learning computers, but I do not want to trust my life on the road to a learning computer whose brain is not yet ready for driving.
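To make the point above concrete: a "set series of tasks for every scenario" amounts to a hand-written lookup table. Below is a minimal sketch in Python, with invented scenario names and actions; it is an illustration of the commenter's idea, not how any real autonomous-driving system works.

    # Hypothetical "scenario -> action" table; every entry is a moral
    # decision a programmer made in advance. All names are invented.
    RULES = {
        ("dog_in_road", "clear_shoulder"):    "swerve",
        ("dog_in_road", "oncoming_traffic"):  "brake_straight",
        ("child_in_road", "clear_shoulder"):  "swerve",
        # ("child_in_road", "oncoming_traffic") is missing: who decides?
    }

    def choose_action(obstacle, surroundings):
        """Return the pre-programmed action, if the programmers
        anticipated this exact combination of circumstances."""
        try:
            return RULES[(obstacle, surroundings)]
        except KeyError:
            # Every unanticipated (or morally contested) case lands here.
            raise ValueError(f"no rule for {obstacle!r} with {surroundings!r}")

    print(choose_action("dog_in_road", "clear_shoulder"))  # swerve

The table can always be extended, but the road supplies more combinations than any programmer can enumerate, which is exactly the worry expressed above.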
At the beginning of Chapter 8, Carr discusses two major examples that would require a self-driving car to make moral choices: in one, the car spots a dog standing in the middle of the street; in the other, a child gets knocked into the road (184). Both of these scenarios would require a moral compass. Would the car be able to make the right moral decision, or would it make the safest decision? Having to even ask that is scary enough. The only way a self-driving car could function properly is with the automation of moral choices. But how accurate could it get? Is there really a way to replicate how a human mind processes the world and makes decisions accordingly? I agree with Carr that it is “impossible to automate complex human activities without also automating moral choices” (186). But is it even possible at all? While a driverless car may reduce the number of car crashes, at what cost would that come? Would the car choose to hit the dog or child just to reduce the risk of a crash? Would it choose the “safest” option and risk the life of another? Is a computer really capable of making choices where lives are at stake? Every day, humans face decisions that require morals. What is the point of trying to teach a computer morals when most of us already have them and are capable of making these hard decisions? Just like the idea of a crewless flight, the idea of a driverless car seems ominous and completely unnecessary.
Since I have personally swerved to avoid hitting a frog, I can reasonably say that I would swerve to avoid either the child or the animal. If a self-driving car decided that hitting them was the “safer” option, I would never, ever want to use that car or even go near it. It reminds me of the discussion in EQ1 where we had to choose between killing one man to save many, and vice versa. If a classroom full of students couldn't decide what was right, how could a car with no driver? It would see the facts but not consider the implications. The car might see that it was safer to keep going, while a human would consider that more than one person would be hurt.
What if the manufacturers allowed drivers to set personal preferences for the car's driving? For example, you could set a preference so that the car would swerve and miss the frog, endangering your own life but saving the frog. Would you reconsider your answer? Or do you not trust technology enough to perform critical tasks? Some might say it would be better for an emotionless car to make the decision about hitting a frog than a panicked driver who is not in the right state of mind. With a self-driving car, the driving would be as close to perfect as possible. With a human driver, the driving may be more dangerous for everyone involved.
The two examples that Carr provides leave me with a thought that lingers in my mind. I could not imagine not swerving for a child or a pet. The idea of a car with no feeling or empathy deciding it would be safer not to swerve is appalling to me. Even if we could implant moral choices into a car, it would never be the same as the human thought process and human problem solving. Why would we even want to try? It would be a waste of time, since we can make such decisions ourselves.
The scenario I imagine is not the car hitting a child, because I believe programmers would address that, but the car trying to avoid hitting a frog or squirrel and causing an accident instead. The problem is not human drivers but cars. We have taken away the dynamics of driving that keep us engaged in the world around us. As Carr writes of his own experience, "the pleasures of having less to do were real, but they faded. A new emotion set in: boredom. I didn't admit it to anyone, hardly to myself even, but I began to miss the gear stick and clutch... the automatic made me feel a little less like a driver and a little more like a passenger" (5-6). We have made driving boring, which then makes us seek other things to occupy our time and focus our minds on.
The only good that would come out of self-driving cars and computer-decided morals would be if morals were standardized. While I understand that sometimes there is no right decision, perhaps the computers could consistently choose the same side of right. In the criminal justice system, punishments are standardized, to an extent. I think it would be wonderful if there were such a standardized system for morals, but I do not ever see it coming true, due to the religious and societal differences of every person.
Although I stated that the only good of self-driving cars would be a set of standardized morals, I really do not like the thought of self-driving cars at all. Someone I follow on Snapchat owns a self-driving Tesla. The video he posted was not of him driving, but of him sitting in the passenger seat. He wasn't in control at all. He didn't care to be. And that terrified me. He seemed so incredibly passive about his situation, and it scared me to think that he could be just as passive while his car chose between a child and an animal.
Self-driving cars may not be science fiction for much longer. It is understood that self-driving cars could reduce crashes and accidents by eliminating the human factor; however, eliminating the human factor may also be part of the problem. With automation lacking the basic morals and emotions that humans possess, it would be impossible for a machine to make a split-second moral decision based on the situation. A driver may notice and recognize the difference between a box and a child in the road, but an automated system would recognize only that an object is in the road. Automated systems have no way to differentiate between an inanimate object and a person, or anything else with some deeper meaning. Since an automated system cannot tell a human life from a cardboard box in the middle of the road, it wouldn't be able to make morally based decisions while driving, posing a serious issue for road safety. Until machines are able to understand morals and emotions, we should not expect automation to take over tasks like these.
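As a rough illustration of the gap described above: even where a system does attach a label to an obstacle, the "moral weight" of that label is just a number someone typed in ahead of time. Here is a minimal sketch, with invented labels, weights, and thresholds; it is not how any real vehicle is programmed.

    # Hypothetical harm weights; the values themselves are the moral
    # choice, and they were chosen at a keyboard, not on the road.
    HARM_WEIGHT = {
        "cardboard_box": 0.0,
        "dog": 0.5,
        "child": 1.0,
    }

    def should_swerve(label, risk_to_passenger):
        """Swerve only if the obstacle's assigned weight exceeds the
        estimated risk that swerving poses to the passenger."""
        return HARM_WEIGHT.get(label, 0.0) > risk_to_passenger

    print(should_swerve("cardboard_box", 0.2))  # False: drive through it
    print(should_swerve("child", 0.2))          # True: swerve

Whatever values fill that table, the moral choice was made long before the car ever met the child.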