Autonomous Cars + IoT, and Life or Death Decisions (By Aritra Das)

 

BlackBerry held its annual analyst event last week, where CEO John Chen laid out something I found very provocative about the future the company anticipates.

It has to do with far broader interoperability, unique dependencies, and a level of potential security risk that most other technology firms aren't yet considering. What if the coming wave of autonomous vehicles could connect to any IoT device?

Let's explore some safety risks associated with autonomous vehicles and the Internet of Things this week. Then we'll close with my product of the week: a new tablet being released later this month that represents Amazon's first real effort to build a Surface-like device.

Our Frightening Autonomous IoT Future

I could have titled this our fascinating, exciting, or unforgettable autonomous IoT future, but other than BlackBerry, no one seems to be focused on the security problems this future will bring with it. We are creating an ecosystem where cars and robots are autonomous and connected, so they will be able to tap into the other devices, sensors, and data repositories available to them.

Companies like Intel, Nvidia, and Qualcomm have been talking about the massive amount of training, sensors, and AI technology going into cars, and about the C-V2X (cellular vehicle-to-everything) capability that lets these vehicles communicate with each other and with the smart city ecosystems they will be traveling through.

Capabilities like using other cars' sensors to identify threats a vehicle can't yet see are standard in demonstrations. Cars will have layers of sensors, from LiDAR (light detection and ranging) to cameras (including infrared), that will be able to see better than you can in all kinds of weather, and they'll be fully trained on how to deal with anything they see. But what about all the cameras popping up around us?

Smart cities will have their own cameras, sensors, and huge data lakes of information that cars could also use to see things beyond their own range.

For instance, if a child runs toward the street, the car's cameras might not see them yet, but a security camera on a nearby house, store, or office building might. Cars routed down that street would, if connected to that camera, get an alert that a hazard was about to enter the road and should be avoided. They might even get identifying information indicating whether that hazard is a child, dog, cat, or some other object in motion, and from that the car could better anticipate and avoid hitting it.
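To make that flow concrete, here's a minimal sketch in Python of what such an infrastructure-to-vehicle hazard alert might look like. The HazardAlert message, its fields, and the distance check are my own illustrative assumptions, not any actual V2X standard.

```python
from dataclasses import dataclass
import time

@dataclass
class HazardAlert:
    """Illustrative infrastructure-to-vehicle hazard message (not a real V2X schema)."""
    source_id: str      # e.g., the ID of the building's security camera
    hazard_type: str    # "child", "dog", "cat", or "unknown_moving_object"
    lat: float          # last observed position of the hazard
    lon: float
    timestamp: float    # when the camera saw it

def should_slow_down(route_point: tuple[float, float], alert: HazardAlert,
                     radius_m: float = 50.0, max_age_s: float = 2.0) -> bool:
    """Decide whether the car should pre-emptively slow for a reported hazard."""
    if time.time() - alert.timestamp > max_age_s:
        return False  # stale alert: the hazard may already be gone
    lat, lon = route_point
    # Crude flat-earth distance, close enough at city-block scale
    dist_m = ((alert.lat - lat) ** 2 + (alert.lon - lon) ** 2) ** 0.5 * 111_000
    return dist_m <= radius_m

alert = HazardAlert("cam-42", "child", 37.7749, -122.4194, time.time())
print(should_slow_down((37.7750, -122.4195), alert))  # True: slow down
```

The point is less the math than the trust question: the car is acting on data from a camera it does not own.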

Let's extend this to street sensors that could report ice or water on the road, which could pose a danger depending on how fast the car is going and what type of tires it has. Gunshot sensors could warn cars away from dangerous areas. Historical crime data could also help route vehicles around areas where carjackings and other crimes that put passengers at particular risk make travel unacceptably hazardous.
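Here's a similarly hedged sketch of how a router might fold such hazard data into route selection. The field names and penalty weights are assumptions for illustration; a real system would be far more sophisticated.

```python
# Hypothetical hazard-weighted route scoring. Each road segment carries
# reports from street sensors (ice), gunshot sensors, and historical
# crime data; all field names and weights are illustrative assumptions.
def route_cost(segments: list[dict],
               ice_penalty: float = 5.0,
               gunshot_penalty: float = 20.0,
               crime_penalty: float = 10.0) -> float:
    total = 0.0
    for seg in segments:
        total += seg["length_km"]                          # base travel cost
        total += ice_penalty * seg.get("ice_reports", 0)
        total += gunshot_penalty * seg.get("gunshots_last_hour", 0)
        total += crime_penalty * seg.get("carjacking_rate", 0.0)
    return total

safe = [{"length_km": 2.0}, {"length_km": 1.5}]
risky = [{"length_km": 1.0, "gunshots_last_hour": 1},
         {"length_km": 0.8, "carjacking_rate": 0.6}]
print(route_cost(safe), route_cost(risky))  # 3.5 vs. 27.8: take the safe route
```

Notice that whoever controls those sensor fields effectively controls the route, which is exactly the risk described next.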

Now imagine if some hostile actor were able to take control of any number of these sensors. They could force the car into unsafe areas, or cause it to slow and, rather than avoiding a carjacker, serve itself up to one. Instead of sending the car a safe route, they could feed it a very unsafe one.

BlackBerry's Solution

What John Chen seemed to be describing was a future solution from BlackBerry that would establish a zero-trust network for these devices and for the autonomous vehicles, robots, drones, and other things that make use of their data.

This zero-trust environment would provide a comprehensive way to secure IoT devices, secure the data coming from them (both in transit and at rest), and aggressively protect the autonomous things using that data from being compromised by malware.
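BlackBerry hasn't published implementation details, so here's only a minimal sketch of the zero-trust idea: the vehicle treats every sensor message as untrusted until it can verify the sender. The per-device key registry, message format, and HMAC scheme below are my own illustrative assumptions.

```python
import hashlib
import hmac
import json

# Hypothetical registry of per-device keys a vehicle might hold after
# devices enroll in the zero-trust network (illustrative only).
DEVICE_KEYS = {"cam-42": b"key-provisioned-at-enrollment"}

def sign(device_id: str, payload: dict) -> bytes:
    """Device side: authenticate a sensor reading before sending it."""
    body = json.dumps(payload, sort_keys=True).encode()
    mac = hmac.new(DEVICE_KEYS[device_id], body, hashlib.sha256).hexdigest()
    return json.dumps({"device_id": device_id, "payload": payload,
                       "mac": mac}).encode()

def verify(message: bytes) -> dict | None:
    """Vehicle side: accept the payload only if the sender is enrolled
    and the message is untampered; otherwise drop it."""
    msg = json.loads(message)
    key = DEVICE_KEYS.get(msg["device_id"])
    if key is None:
        return None  # unknown device: zero trust means we reject it
    body = json.dumps(msg["payload"], sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return msg["payload"] if hmac.compare_digest(expected, msg["mac"]) else None

wire = sign("cam-42", {"hazard": "child", "lat": 37.7749})
print(verify(wire))  # accepted; a tampered or unenrolled message returns None
```

In practice this would rely on certificates and hardware roots of trust rather than shared keys, but the principle is the same: no sensor's word is taken at face value.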

While we are likely at least five years from all of this coming together, the effort to secure this autonomous/IoT solution is arguably more important than the effort to create it. We are anticipating a substantial reduction in automobile-related deaths once autonomous driving becomes viable.

But if those vehicles and the data they use aren't secured, we could see an expansion of the kind of Tesla crashes caused thus far by drivers' poor decisions to rely on a technology that isn't ready for autonomous driving.

The Danger of Mixed Messages

Right now, autonomous driving isn't that popular. Concerns about the technology suggest that adoption will lag far behind its potential over the next decade or so because people don't trust the tech. We saw during the pandemic how mixed messages from the CDC on masks, along with concerns about incredibly infrequent blood clots, caused people to distrust the vaccines.

The number of drivers who have died in Teslas while allegedly using Autopilot has grown substantially, and each crash seems to get front-page coverage. These crashes are mainly caused by Tesla drivers who believe they have an autopilot, yet that capability won't arrive until we have Level 4 or 5 autonomous systems. Tesla's cars currently range from Level 2 to 2+, and the system lags Cadillac's comparable offering in performance even while Tesla calls it Autopilot.

Since I was a kid, people have told the story of the first significant cruise control accident: someone who rented an RV, didn't know what cruise control was, and was told it was like an autopilot. So he set the cruise control, went back to make some coffee, and ended up in a massive crash. Elon Musk either never heard that story or has some weird desire to see how many people he can get to crash their cars the same way.

In much the same way that the CDC's mixed messages and the blood clot reports adversely impacted the vaccine rollout, these crashes are scaring people away from autonomous cars before the technology is even ready.

Just as we need a critical mass of people vaccinated to reach herd immunity, we need a critical mass of people using autonomous cars so we can bring the 38,000 people killed, and the 4.4 million injured seriously enough to require medical attention, in car accidents each year on U.S. roadways down to numbers far closer to zero.

If you buy an autonomous car, it will make you safer. But only if a critical mass of people buy the technology, assuming it works, do cars become genuinely safe. I often wonder if Elon Musk has some self-destructive condition, or simply doesn't like people, because there is no reason to call a technology Autopilot before Level 4 autonomous driving is reached. People who drive Teslas are often upper class, a group that includes politicians, and both the EU and the U.S. have handed out fines in the billions to tech companies in the past. I wish Musk would stop using the name Autopilot to save lives. But even if he stopped only to avoid a huge fine, I expect the result would benefit Tesla drivers, the Tesla company, and the future of autonomous driving.

