Friday, November 8, 2024

How Intel’s Work With Autonomous Cars Could Redefine General Purpose AI


The closest thing to a general-purpose AI is likely to come out of the effort to let cars drive themselves, because, when done, this will be the first in-depth use of an AI to fully replace a human at a complex task. Beyond demonstrating capability, the effort is focused on saving around 36K human lives a year (the number of U.S. automobile deaths in 2019) when fully deployed, so its importance extends well beyond AI.

However, today I’d like to focus on the RSS (Responsibility-Sensitive Safety) component of Autonomous Driving because it will likely form the core of AI safety development for the future.

RSS And The 3 Laws Of Robotics

In his collection “I, Robot,” Isaac Asimov set out what is now widely viewed as the foundation for how an autonomous machine should behave: the Three Laws of Robotics:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These laws have grounded our thinking about future AIs, and RSS, an approach championed by Intel’s Mobileye, is consistent with them. RSS, which currently focuses on automobiles, specifies that a car may not hit the car it is following, that it must not drive recklessly, that right of way is given and not taken, that it must use extra caution when visibility is compromised, and that if a crash can safely be avoided, the car must avoid it.
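To make the first of those rules concrete, the published RSS model defines a minimum safe longitudinal distance the following car must keep so that, even if the lead car brakes as hard as physically possible, the follower can still stop in time. The Python sketch below illustrates that formula; the function name and the parameter values are illustrative assumptions for this article, not Intel’s or Mobileye’s calibrated settings.

```python
# Minimal sketch of the RSS minimum safe longitudinal following distance
# (after the published RSS papers). Parameter values below are illustrative
# assumptions, not Intel/Mobileye's calibrated settings.

def rss_safe_longitudinal_distance(
    v_rear: float,          # speed of the following (rear) car, m/s
    v_front: float,         # speed of the lead (front) car, m/s
    response_time: float,   # rear car's assumed response time, s
    a_accel_max: float,     # max acceleration of rear car during response, m/s^2
    a_brake_min: float,     # braking the rear car is guaranteed to apply, m/s^2
    a_brake_max: float,     # hardest braking assumed of the front car, m/s^2
) -> float:
    """Return the minimum gap (m) the rear car must keep so that it can stop
    even if the front car brakes as hard as physically possible."""
    distance = (
        v_rear * response_time
        + 0.5 * a_accel_max * response_time ** 2
        + (v_rear + response_time * a_accel_max) ** 2 / (2 * a_brake_min)
        - v_front ** 2 / (2 * a_brake_max)
    )
    return max(0.0, distance)

# Example: both cars at 30 m/s (~67 mph) with assumed parameters -> ~79 m gap.
print(rss_safe_longitudinal_distance(
    v_rear=30.0, v_front=30.0,
    response_time=0.5, a_accel_max=2.0,
    a_brake_min=4.0, a_brake_max=8.0,
))
```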

Part of what shapes this effort is surveys of potential buyers of the technology, and some of those results are conflicted. For instance, 76% of those surveyed said they favored a decision tree that put pedestrian safety ahead of the driver’s safety, but when it came to their own car, they wouldn’t buy one that didn’t put their life first. In another survey, respondents said they wanted the car’s decision tree to minimize the total number of deaths in an accident, except when it was their own car, in which case it should favor them.

The standards efforts surrounding autonomous cars are, as you might expect, significant. Part of the problem is defining what is under the manufacturers’ control and what falls to governments, and governments don’t seem to want to take responsibility, which makes the process harder. For instance, if a stoplight were to malfunction and communicate the wrong thing to the car, the municipality would still expect the blame, and more of the decision load, to fall on the vehicle.

For now, the industry has agreed that manufacturers will define the parameters while governments define the values. This mirrors how things work today: a manufacturer can build a car that will do over 300 MPH, but speed limits, set by the government, factor in risk and determine what is legally allowed.

RSS is designed to help regulators define an acceptable risk/reward balance, which the cars then adhere to. Getting that balance right is critical, and it is not that different from what the FAA did in 1997 when it helped form the Commercial Aviation Safety Team, an effort credited with a 95% decrease in aviation fatalities.
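To see how that parameter/value split could work in practice, here is a minimal, hypothetical sketch: the manufacturer ships the safety check, while the regulator publishes the values (response time, assumed braking limits) the check runs against. The RSSParameters container, the function names, and the numbers are assumptions for illustration only, not a real regulatory schema.

```python
# Minimal sketch of "industry defines the parameters, government sets the
# values." The regulator publishes the values; the manufacturer's planner
# only checks against whatever values it is given. Numbers are illustrative.

from dataclasses import dataclass

@dataclass(frozen=True)
class RSSParameters:
    response_time: float   # s, how quickly the car is assumed to react
    a_accel_max: float     # m/s^2, worst-case acceleration during the response
    a_brake_min: float     # m/s^2, braking the car commits to applying
    a_brake_max: float     # m/s^2, hardest braking assumed of the car ahead

def following_gap_is_safe(gap: float, v_rear: float, v_front: float,
                          p: RSSParameters) -> bool:
    """Manufacturer-side check: is the current gap at least the RSS minimum
    computed from the regulator-supplied parameter values?"""
    d_min = (v_rear * p.response_time
             + 0.5 * p.a_accel_max * p.response_time ** 2
             + (v_rear + p.response_time * p.a_accel_max) ** 2 / (2 * p.a_brake_min)
             - v_front ** 2 / (2 * p.a_brake_max))
    return gap >= max(0.0, d_min)

# A regulator could tighten or relax these values without touching the car's code.
REGULATOR_VALUES = RSSParameters(response_time=0.5, a_accel_max=2.0,
                                 a_brake_min=4.0, a_brake_max=8.0)

# 60 m at 30 m/s is below the ~79 m minimum under these values -> False.
print(following_gap_is_safe(gap=60.0, v_rear=30.0, v_front=30.0, p=REGULATOR_VALUES))
```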

This progression means that when Autonomous Driving takes off around mid-decade, it will come to market with a rich set of rules that could then be applied to robots and AIs everywhere. These developing rules prioritize risk analysis and balance over unattainable absolutes. We will also discover our tolerance for deaths: even if the reduction in deaths were two orders of magnitude, going from 36K to 360 a year, there would initially be a tendency to point to the 360 deaths as a problem with the technology while forgetting the 35K+ people who were saved.

But these newly defined rules and practices, which strike a balance between safety and capability, could then be applied to other AIs, making them far safer as a result. In effect, autonomous vehicles are paving the way to our AI future.

General Purpose AI

Autonomous cars will be the closest thing we have to a general purpose AI when they take to the road around mid-decade. They will create a foundation for the AIs and autonomous robots that follow, establishing a process for balancing risk and capability to keep humans safe.

The positive impact will range from personal robotics in the home to AIs that control everything from Smart Cities to autonomous medical equipment in hospitals and in homes. Getting this RSS risk/reward balance right will be critical to the success of the coming wave of General Purpose AIs, and we’ll have autonomous cars to thank for plowing that field.
