21st edition of the 6th year of SmartDrivingCars

Uber Finds Deadly Accident Likely Caused By Software Set to Ignore Objects On Road

A. Efrati, May 7, "Uber has determined that the likely cause of a fatal collision involving one of its prototype self-driving cars in Arizona in March was a problem with the software that decides how the car should react to objects it detects, according to two people briefed about the matter." Read more  Hmmmm…. Uber is "leaking" this???  Is this Spin?  Fake News??  I guess Uber doesn't believe in transparency here.  Where is the official public statement of reassurance???

"The car's sensors detected the pedestrian, who was crossing the street with a bicycle," …Hmmmm…. Pretty much what I wrote on March 24: the sensors "saw something"…  "but Uber's software decided it didn't need to react right away." …"Right away" is Fake News.  It never reacted.  Uber has not released any data indicating that the software ever reacted.  "That's a result of how the software was tuned." …That was a major "tuning" faux pas.  What is being divulged here is that Uber's software never became confident enough that what it was seeing was something that it should not hit and, at least, begin to apply the brakes (or swerve, or ???).  Even the driver in the video recognized that the object should not be hit a split second before the crash.  So the problem is not "tuning"; it is outright "fuhgeddaboudit".  "Like other autonomous vehicle systems, Uber's software has the ability to ignore "false positives," or objects in its path that wouldn't actually be a problem for the vehicle, such as a plastic bag floating over a road…"  Is Uber suggesting that its software can't tell the difference between a plastic bag floating over the road and a pedestrian with a bicycle, even after seeing the object 30 to 60 or more times over the 3 or more seconds that the object was in view?  If this isn't Fake News, then Uber is hopelessly far behind…  "In this case, Uber executives believe the company's system was tuned so that it reacted less to such objects."  It didn't react at all!  "But the tuning went too far, and the car didn't react fast enough, one of these people said."  …It didn't react at all!  If this wasn't so important, I'd put it in C'mon Man.

"False positives" are the symptom, not the problem.  The problem is Uber's system design and operational policy.  Uber's system designers knew that the sensors under certain conditions reported "false positives" (were "spooked").  One of those conditions was possibly the combination of "is the closing speed equal to the car's current speed" AND "is the car's current speed greater than 30 mph."  In situations in which both are true, Uber's "tuning" is outright "fuhgeddaboudit".  This "tuning" effectively turns off Uber's sensors with respect to detecting anything that is stationary or moving across its lane ahead.  Had Uber understood this, then Uber would/should have…

1.  limited the operation of its cars to speeds under 30 mph,

2.  limited the operation of its cars at speeds greater than 30 mph only to roadways where pedestrians are extremely unlikely to cross, and

3.  focused on substantially improving its ability to interpret its sensor data so that the false-alarm rate becomes so small that false alarms can be tolerated throughout Uber's operational domain.
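To make the hypothesized failure mode concrete, here is a minimal sketch of the kind of gating logic described above. Every function name, threshold, and condition here is my own assumption for illustration; this is emphatically not Uber's actual code.

```python
# Hypothetical sketch of the "tuning" hypothesized above -- NOT Uber's code.
# The conjecture: when the closing speed equals the car's own speed (i.e.,
# the object is stationary or crossing the lane) AND the car is moving
# faster than 30 mph, the detection is dismissed as a likely false positive.

SPEED_THRESHOLD_MPH = 30.0  # assumed threshold, per the discussion above

def should_react(car_speed_mph: float, closing_speed_mph: float) -> bool:
    """Return True if the system should react to a detected object."""
    # Closing speed ~= own speed implies the object has no velocity
    # component along the lane (stationary, or crossing it).
    object_is_stationary_or_crossing = abs(closing_speed_mph - car_speed_mph) < 1.0
    if object_is_stationary_or_crossing and car_speed_mph > SPEED_THRESHOLD_MPH:
        return False  # detection suppressed -- the deadly case
    return True

# A pedestrian crossing ahead of a car doing 40 mph would be ignored:
print(should_react(car_speed_mph=40.0, closing_speed_mph=40.0))  # False
# The same pedestrian ahead of a car doing 25 mph would trigger a reaction:
print(should_react(car_speed_mph=25.0, closing_speed_mph=25.0))  # True
```

Note how, under this hypothesized logic, points 1 and 2 above follow immediately: below 30 mph the suppression never fires, and above 30 mph the car is blind to exactly the crossing-pedestrian case.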

…"Meanwhile, the human driver behind the wheel, who is meant to take over and prevent an accident, wasn't paying attention in the seconds before the car hit…"  …I think that this is a cheap shot against the driver.  I suspect that this car had a screen that displayed the real-time status of the automated driving system.  I would not be surprised if that screen was mounted below the radio and that the driver was actually monitoring the operation of the automated driving system prior to the crash.  Why this display wasn't on the dash, so that the driver's peripheral vision could remain on the road ahead while the driver was monitoring the performance of the system, is a question Uber should answer… if it had any interest in being transparent.

Another question that Uber could be asked: Why didn't the monitoring system warn the driver that it was "seeing something" and ask the driver to look to see if it should be "saying/doing something"?

Since it doesn't look like Uber is going to really divulge anything, it is incumbent on the NTSB to dig deeply into this "false alarm" issue.  Disregarding "false positives" in order to circumvent a little passenger/customer discomfort enables "false negatives", which kill people.  Not pretty!
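The false-positive/false-negative trade-off at issue here can be illustrated with a toy detection threshold. All numbers, names, and confidences below are made up purely for illustration:

```python
# Illustrative sketch (not any company's actual code): a single confidence
# threshold trades false positives against false negatives. "Tuning" the
# threshold up to suppress phantom detections (plastic bags) also
# suppresses real-but-low-confidence detections (a pedestrian at night).

def classify(confidence: float, threshold: float) -> str:
    """Decide whether a detection warrants a reaction."""
    return "BRAKE" if confidence >= threshold else "IGNORE"

# Hypothetical per-object confidences from a perception stack:
plastic_bag = 0.30   # a false positive we'd like to ignore
pedestrian  = 0.55   # a real hazard seen with only modest confidence

# A cautious threshold tolerates some jerky false alarms but catches
# the pedestrian:
print(classify(plastic_bag, 0.4), classify(pedestrian, 0.4))  # IGNORE BRAKE

# Raising the threshold to eliminate rider discomfort eliminates the
# true detection too -- the false negative that kills:
print(classify(plastic_bag, 0.6), classify(pedestrian, 0.6))  # IGNORE IGNORE
```

The only durable fix is the one in point 3 above: a better perception system that pushes the two confidence populations apart, so no threshold has to choose between comfort and safety.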

"…Uber has reached its own preliminary conclusion… The problem was what the broader system chose to do with that information."  …Is Uber going to tell us????  This is way more than a "tuning problem".  This is a design and culture problem…

"…In the collision investigation, Uber found that a vital piece of the self-driving car was likely working properly: the "perception" software, which combines data from the car's cameras, lidar and radars to recognize and "label" objects around it. In this case, the software is believed to have seen the objects. The problem was what the broader system chose to do with that information…"  …NO!!!!  The problem is in the "recognize & label".  If it didn't mis-recognize and mis-label, then the ride wouldn't be jerky.  The "perception" software is so intent on "seeing something" in certain domains that it ends up "imagining that it saw something that wasn't there" (a false positive), so the broader system turns off the perception system in those domains.  It is the "vital" "perception" system that is at fault and needs the work.
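There is also simple arithmetic behind the "30 to 60 or more sightings" point: even a weak per-frame detector should become near-certain about an object that stays in view for seconds. A toy sketch, with all rates assumed for illustration:

```python
# Illustrative only: if a single frame flags a real object with
# probability p, the chance that N independent frames ALL miss it
# shrinks geometrically. An object in view for ~3 s at 10-20 Hz
# should be nearly impossible for the system as a whole to dismiss.

def prob_detected_at_least_once(p_per_frame: float, n_frames: int) -> float:
    """Probability that at least one of n_frames frames detects the object."""
    return 1.0 - (1.0 - p_per_frame) ** n_frames

# Even a mediocre 60%-per-frame detector, over 30 frames:
p = prob_detected_at_least_once(0.6, 30)
print(p)  # vanishingly short of 1.0
```

Which is exactly why "the system chose to ignore it" is a design decision, not a sensing limitation: the evidence was there, frame after frame.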

I suspect that this mess will be discussed at the 2nd Annual Princeton SmartDrivingCar Summit.  Uber isn't the only company with a "false alarm" issue.  Alain

Smart Driving Cars Podcast Episode 38

F. Fishkin, May 10, “The continuing Uber crash investigation, Waymo and Ohio rolls out the welcome mat for the testing of self driving cars. All that and more in Episode 38 of the Smart Driving Cars podcast. This week Princeton’s Alain Kornhauser and co-host Fred Fishkin are joined by Bryant Walker Smith of the University of South Carolina and Stanford. Tune in and subscribe!”

Hmmmm…. Now you can just say "Alexa, play the Smart Driving Cars podcast!"  Ditto with Siri and GooglePlay.  Alain

Real information every week.  Lively discussions with the people who are shaping the future of SmartDrivingCars.  Want to become a sustaining sponsor and help us grow the SmartDrivingCars newsletter and podcast?  Contact Alain Kornhauser!  Alain

2nd Annual Princeton SmartDrivingCar Summit

May 16 & 17, 2018

Registration NOW OPEN

Become a Sponsor and Promote your Wares

'Wild West' Ohio Beckons Self-Driving Cars Even After Uber Death

C. Trudell, May 9, "John Kasich says he wants to make Ohio the "wild, wild west" for self-driving car testing, regardless of the recent fatal crash involving Uber Technologies Inc.

The Republican governor called the March 18 Uber incident "terrible," but is plowing ahead anyway. The executive order he signed today allows companies to test cars on any public road in the state, including without anyone behind the wheel. A licensed driver will have to be monitoring the car remotely and have the ability to avoid accidents if the car's system fails, according to the order…." Read more  Hmmmm… Read the executive order.  This is actually a BIG deal, considering the recent Uber and Waymo crashes.  Even though he is a "term-limited lame duck", this is a major step for Gov. Kasich and Ohio.  While there isn't legislative "teeth" in executive orders, it is an enormous "Welcome Mat" that Ohio has placed throughout the State.  Boy, do I wish a similar Executive Order would come from the desk of our newly elected governor.  Alain

Waymo's self-driving van not at fault, in manual mode during crash, police say

B. Raven, May 7, "…The video shows a silver Honda run a red light, avoid striking a red car entering the intersection, jump the median and crash into Waymo's Chrysler Pacifica minivan. The Chandler Police Department reports in a Friday news release that Waymo's Pacifica was in manual mode and not at fault in the crash.

Police say that the driver of the Pacifica "sustained injuries which required hospitalization," and that the driver of the Honda was cited for a red light violation…"  Read more  Hmmmm… See video.  This is a VERY important "corner case" because (if Waymo becomes "Transparent wrt Safety") we at least have some idea as to how an alert human driver reacted/behaved in this very difficult situation.  I'm certain that everyone, including Waymo, would like to see how their automated driving system would behave in this "corner case".  This is a situation where, on the surface, it seems as if there is very little that one could do.

While the car was in manual mode, I suspect that the sensors were all working and that their data prior to the crash was captured.  This is very valuable data that Waymo should openly share with everyone for the purpose of enabling everyone to test how their system would have responded to this “corner case”. 

One of the major discussion topics at next week’s Princeton Summit is likely to be this concept of having everyone in this industry openly share safety related data and information so as to allow everyone to become safer faster.  One of the key individuals who will weigh in on this topic will be Voyage CEO, Oliver Cameron.  Come join in the conversation.   Alain

How do you define "safe driving" in terms a machine can understand?

May 10, “WHEN people learn to drive, they subconsciously absorb what are colloquially known as the “rules of the road”. When is it safe to go around a double-parked vehicle? When pulling out of a side street into traffic, what is the smallest gap you should try to fit into, and how much should oncoming traffic be expected to brake? The rules, of course, are no such thing: they are ambiguous, open to interpretation and rely heavily on common sense. The rules can be broken in an emergency, or to avoid an accident. As a result, when accidents happen, it is not always clear who is at fault.

All this poses a big problem for people building autonomous vehicles (AVs). They want such vehicles to be able to share the roads smoothly with human drivers and to behave in predictable ways. Above all they want everyone to be safe. That means formalising the rules of the road in a precise way that machines can understand. The problem, says Karl Iagnemma of nuTonomy, an AV firm that was spun out of the Massachusetts Institute of Technology, is that every company is doing this in a different way. That is why some in the industry think the time has come to devise a standardised set of rules for how AVs should behave in different situations….

…The wider point, though, is that even if it turns out to be possible to build AVs governed by mathematically rigorous rules of the road, the industry's progress would still be subject to the vagaries of human nature."  Read more  Hmmmm…  Nice that The Economist has entered into the discussion.  The fact that humans are in the loop makes this a most challenging and, in some sense, philosophical issue.  Mathematical rigor loses much of its luster when it has to confront Mother Nature.  Alain

Nvidia CEO says self-driving pause, Tesla Model 3 issues did not affect auto business

"Nvidia Corp. is still waiting to make big bucks from self-driving cars, but Chief Executive Jensen Huang said Thursday that it is not being held back by a pause in self-driving tests or Tesla Inc.'s slow production ramp for the Model 3.

In a short interview Thursday with MarketWatch after the chip maker released strong earnings results, Huang said that a recent pause in self-driving tests on public roads did not impact Nvidia's automotive business, and that cars should be back out in public "pretty soon." …

“We are currently only testing in private roads, private tracks, and in our simulators,” Huang told MarketWatch in a brief interview Thursday afternoon. “We took a pause so we could make sure we learned everything we could from the recent incident and I think [Uber’s] public statements are pretty clear, so we’ve taken a pause and we’ll resume testing here pretty soon.”…”  Read more  Hmmmm..  Prudent.  Alain

Tragic fatal crash in Tesla Model S goes national because – Tesla

F. Lambert, May 9, “Based on annual statistics, about 500 gas-powered cars caught on fire yesterday and ~100 people died on US roads, but only one car crash is making the national news today: a tragic fatal accident in a Tesla Model S….A Tesla representative commented in a statement:

"We have not yet been able to learn the vehicle identification number, which has prevented us from determining whether there is any log data. However, had Autopilot been engaged it would have limited the vehicle's speed to 35 mph or less on this street, which is inconsistent with eyewitness statements and the damage to the vehicle,"… Read more  Hmmmm…  All so unfortunate.  What it may show is that we need to have Safe-driving technology that can't be turned off.  Excessive speeding should no longer be tolerated!  Alain

After Fatal Uber Crash, a Self-Driving Start-Up Moves Forward

C. Metz, May 7, "… While other companies have tested self-driving cars for years and some are in the early stages of offering a taxi service,'s autonomous vehicle debut on Monday was still notable. It was the first new rollout of autonomous cars in the United States since a pedestrian died in Arizona in March after a self-driving car operated by Uber hit her…" Read more  Hmmmm…. Nice to have another entrant, but this is still very much "Self-driving" and not "Driverless".  I wonder how they address "false positives".

Waymo's self-driving car service launches in Phoenix this year

A. Krok, May 8, “…At this week’s Google I/O conference, Waymo CEO John Krafcik said that his company’s self-driving ride-hailing service will launch in earnest this year in Phoenix. …

Right now, Waymo's pilot program has ordinary Phoenicians hailing rides in an autonomous (and now also driverless) Chrysler Pacifica. The rides are limited to a geofenced area, and they're offered free of charge. … Not much appears to change in the shift to the proper service, except for one thing: money. Waymo confirmed to Roadshow that the service will be paid and open to the public, which means the days of free van rides around Phoenix are over…" Read more  Hmmmm…. Not much new here.  Alain


A. Hawkins, May 9, "Right now, a minivan with no one behind the steering wheel is driving through a suburb of Phoenix, Arizona. And while that may seem alarming, the company that built the "brain" powering the car's autonomy wants to assure you that it's totally safe. Waymo, the self-driving unit of Alphabet, is the only company in the world to have fully driverless vehicles on public roads today."  Hmmmm…. True! (although "fully" is redundant)  "That was made possible by a sophisticated set of neural networks powered by machine learning about which very little is known — until now. …"  Read more  Hmmmm…. This last sentence, maybe "not so much".  This is a nice article, but much of it may be window dressing, especially the extent of DeepLearning, to support "…The sudden interest by Waymo in burnishing its AI credentials…".  AI, and especially DeepLearning, has a nasty "false positive" problem.  Until false positives can be reduced to rarities, it can't be relied upon in "mission critical" situations.  And Waymo knows that (or it should).  Alain





Calendar of Upcoming Events:


2nd Annual Princeton SmartDrivingCar Summit
May 16 & 17, 2018
Princeton University
Princeton, NJ

Registration NOW OPEN

Become a Sponsor and Promote your Wares



  On the More Technical Side

Capsule Networks (CapsNets) – Tutorial