
Qualcomm’s AI-Driven Video Compression and Ensuring That AIs Do No Evil


This month, Qualcomm gave a fascinating presentation on the advancements it has made in applying AI to video. These advancements are incredibly attractive to users who want to lower their data charges but still capture and share great videos from their smartphones.

I spent some time in law enforcement, and videos like those captured during the recent attack in Washington are being used as evidence. Could future videos be disqualified as evidence because an AI like Qualcomm’s broke the chain of evidence? The answer is yes, but the risk can be mitigated in several ways.

I think this problem, as we adopt AI technology, could be much broader than just video. But let’s start there this week and then talk about the more significant AI threats that Microsoft President Brad Smith brought up at CES.

AI For Video Capture

I just did a video for a client, and uploading the 4K video took longer, even on my ultra-fast network, than it took to tape the spot. I don’t want to think how long it would take to upload a similar video using anything less than 5G. Videos are often compressed on cell phones to reduce that time, but with AI, such compression can be increased substantially.

The AI knows what in the image has changed and what hasn’t, and it can selectively choose not to send any part of the image that hasn’t changed, further compressing the video stream. The result is indistinguishable from the source, but the AI does alter the captured image, and the chain of evidence does not currently allow for such alteration. Most of the laws I’m aware of involving a chain of evidence predate this technology and did not anticipate it.
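To make the idea concrete, here is a minimal sketch in Python of how a sender might transmit only the blocks of a frame that changed. Qualcomm has not published its pipeline, so the block size, threshold, and simple difference test here are my own illustrative assumptions standing in for the learned change detection a real AI codec would use:

```python
# Sketch of delta-based frame compression: transmit only the 16x16 blocks
# that changed beyond a threshold. The block size and threshold are
# assumptions for illustration, not Qualcomm's actual parameters.
import numpy as np

BLOCK = 16          # block size in pixels (assumed)
THRESHOLD = 2.0     # mean absolute difference that counts as "changed" (assumed)

def encode_frame(prev: np.ndarray, curr: np.ndarray) -> list:
    """Return a list of (row, col, block) patches for regions that changed."""
    patches = []
    h, w = curr.shape[:2]
    for r in range(0, h, BLOCK):
        for c in range(0, w, BLOCK):
            prev_blk = prev[r:r + BLOCK, c:c + BLOCK].astype(np.int16)
            curr_blk = curr[r:r + BLOCK, c:c + BLOCK].astype(np.int16)
            # Only ship blocks whose content actually changed.
            if np.abs(curr_blk - prev_blk).mean() > THRESHOLD:
                patches.append((r, c, curr[r:r + BLOCK, c:c + BLOCK].copy()))
    return patches

def decode_frame(prev: np.ndarray, patches: list) -> np.ndarray:
    """Rebuild the current frame by patching changed blocks onto the previous one."""
    frame = prev.copy()
    for r, c, blk in patches:
        frame[r:r + BLOCK, c:c + BLOCK] = blk
    return frame
```

The evidentiary problem is visible right in the sketch: the receiver’s reconstructed frame is assembled from old and new pixels, so it is no longer the frame the camera captured, however good it looks.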

The easy fix, to avoid losing otherwise valid evidence, would be to update the laws. But now let’s advance this technology further. Let’s say we were using utilities much like the ones we have for still pictures: removing red-eye, changing closed eyes to open eyes, adjusting eyes so they look at the camera, or putting smiles on people who aren’t smiling.

Now let’s say you are filming a confrontation between a police officer and a protestor and have these utilities active because you had been filming your family. Instead of showing an officer who feared for their life and acted defensively (assuming that is the reality), the video would show a happy officer gleefully shooting a protestor. Now you have a severe evidence problem because the evidence is false, yet the person who shot the video may not even be aware it was altered, and eyewitnesses are notoriously unreliable.

A change in the law won’t fix that, but a change in the process could: for instance, assuring that at least one frame in every 30 or 60 is captured unaltered. You could then not only validate against the accurate frames but also detect that the video had been altered, and return it to a reasonably unaltered state.
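One way such a guarantee might be implemented (a sketch under my own assumptions, not a scheme Qualcomm or any standards body has described) is to hash and sign every Nth raw frame on the device at capture time, so a court can later verify those keyframes came straight off the sensor:

```python
# Sketch: sign the hash of every Nth raw frame at capture time so the
# unprocessed keyframes can be verified later. The interval and key
# handling are simplified assumptions; a real device would use a
# hardware-backed key rather than one generated in software.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

KEYFRAME_INTERVAL = 30  # attest one raw frame per 30 (assumed cadence)

signing_key = Ed25519PrivateKey.generate()

def attest_keyframes(raw_frames):
    """Yield (index, digest, signature) for each attested raw frame."""
    for i, frame_bytes in enumerate(raw_frames):
        if i % KEYFRAME_INTERVAL == 0:
            digest = hashlib.sha256(frame_bytes).digest()
            yield i, digest, signing_key.sign(digest)
```

A verifier holding the device’s public key could later call `signing_key.public_key().verify(signature, digest)` on each attested frame; any processed frame in between could then be checked against, and if necessary reconciled with, its signed neighbors.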

Going Beyond Videos

I use an AI editing program called Grammarly. It generally does a decent job, but from time to time its edits change the entire meaning of a sentence. Now let’s say that after the attack on the Capitol, out of concern for a Senator, I wrote a quick, poorly structured note expressing legitimate concern for that Senator’s safety, and the AI, to help me out, made the sentence read better, but now it reads like a threat. Once the FBI shows up, how do I prove the AI did that and not me?

And as AIs get more capable, given the level of attacks we are already seeing, there is an increased risk that an AI could be hacked to create problems like this intentionally. If you think folks aren’t that twisted, read up on swatting and cyberbullying.

As we implement AIs to assist us in our work, we need to be conscious of how they could be used against us, or could accidentally act against our best interests, and we need to mitigate those threats.

Wrapping Up

As it stands, Qualcomm’s current implementation could be defended, given that the result is virtually indistinguishable from the source. But allowing altered videos into evidence may require a change in the laws surrounding video evidence, since those laws did not take this technology into account when they were written.

But as this technology advances, much like the concerns surrounding deepfakes, you could end up with false evidence, which could corrupt the trial process and lead to a verdict inconsistent with what actually happened. As we advance AIs, we need to update the laws surrounding the activities AIs affect, and we need to better ensure these AIs don’t act, accidentally or on purpose, against our best interests.

Brad Smith, Microsoft’s President, called this out in his keynote at CES, and I agree with him that ensuring the integrity and safety of AIs needs to go hand in hand with creating them, or we’ll likely regret the eventual outcome.

We typically wait for a problem to become evident before moving to address it; with AIs moving at computer speeds, we should get ahead of that problem this time.
