Yesterday, I conducted an extensive interview with Professor Emeritus C.C. (Kelly) Gotlieb (Wikipedia), "Father of Computing in Canada", Department of Computer Science, University of Toronto. Have a watch of the extensive 30-minute interview.
KL: Kempton Lam
KG: Professor Emeritus C.C. Kelly Gotlieb
Table of contents (with time codes):
0:00 KL: Introducing Professor Emeritus C.C. (Kelly) Gotlieb, "Father of Computing in Canada", University of Toronto
0:29 KL: My question about Google Driverless Cars. Three US states already have laws permitting testing of Google Driverless Cars. Talking about the California governor signing the bill "SB-1298 Vehicles: autonomous vehicles: safety and performance requirements" into law.
2:07 KL: Bill SB-1298 allows Google to test the Google Driverless Car provided Google carries $5 million in insurance, and provided there is a driver in the car.
2:21 KG: "That's what I expected."
2:35 KL: My concerns were concerns Kelly raised in an earlier speech of his.
2:47 KG: Lists some of the concerns he has with concepts like the Google Driverless Car. "United States is a very litigious society."
3:12 KG: If a Google Driverless Car gets into an accident, who's to blame? And who can you sue? The person who wrote the program? Google, who authorized the car? The car manufacturer? The person who is in the car? Or all of the above? […] Lots of questions to be asked when failures happen.
4:20 KL: One of the co-founders gave the example that the Google Driverless Car is going to be so good that it will be able to drive a blind person from point A to point B!
4:46 KG's comment
5:17 The ability for a human to override.
5:23 KG: Allowing humans to override the robot.
5:31 KL gives the example of the highly sophisticated aircraft autopilot systems of the last few decades, which are "better" than human pilots at controlling for fuel, flight time, etc., but we have yet to see passenger-carrying planes without human pilots.
6:08 KG gives the example of military drones but points out there aren't any pilotless planes carrying passengers. KG talks about robots that do surgery and diagnostics, but they always have a human expert, a doctor, present, and they always allow the doctor to override the program if the doctor thinks the program is making mistakes.
7:04 KL proudly points out NeuroArm (made jointly by U. of Calgary and MDA)
7:24 KG: So there is always a person. A person has the ultimate responsibility.
7:33 KL talks about the example of driverless trains vs driverless cars.
8:00 KG explains that there is nevertheless a human presence that can override the machines. (very important points)
9:05 KL asks: Google co-founders will come back with their statistics supporting how much safer the Google Driverless Cars will be than the average driver!
9:22 KG: They do say that, and it is probably true. KG will concede the Google Driverless Cars are safer in most conditions. KG will NOT concede the Google Driverless Cars are safer in ALL conditions.
9:36 KL: It is the ALL that we are both deeply worried about!
9:42 KG: Driverless cars are, besides safer, more economical because of the shorter driving distance between cars.
10:08 KG: "Driverless Cars are good and I don't want to see them prohibited. But I would not want them to be the controlling systems of last resort."
10:20 KL: So as long as there is a human override, you are willing to let them drive on the road anywhere?
10:30 KG gives the example of the pilot landing a plane on the Hudson River (Captain Chesley B. "Sully" Sullenberger landing US Airways Flight 1549 on the Hudson River). The captain exercised marvellous control and saved everyone on board. Don't think we have a robot as good as that pilot right now. "I'm not willing to accept a robot is better than this man [Captain Sully]."
11:23 KL: So am I understanding you correctly that you are willing to let the Google Driverless Cars roam free as long as there are humans in the cars able to override in special situations? [KG: Exactly!]
11:55 KL: Don't put a blind driver in a car where he/she has no ability to see or override?
12:05 KG: They are great inventions and I don't want to see them banned by any means. "Under extreme conditions, I would trust a skilled person over a robot controlled by a program."
12:27 KL: With a human in the car, it will be easier to deal with because the laws don't need to be changed?
12:42 KG: It is possible that a human override would be worse than the robot. "But I would take my chances on a human with long experience."
13:03 KL: So you are not saying humans will always be better?
13:10 KG: "I would say if you got an experienced pilot, an experienced driver, who has been in many difficult situations, I would rather have him in an extreme crisis than any program."
13:37 KG: "If their judgement is that they can do a better job than the robot, I will put my life on the line for that."
13:55 KL asks KG to talk about Google Glasses, and about the privacy implications of having such a "big brother" device so close to us. The surveillance vs privacy discussion.
21:42 KL & KG general chat, including KG's experiences working with Alan Turing.