Tuesday, May 18, 2021

The Huge Data Problems That Prevented A Faster Pandemic Response

Huawei has been running a series of roundtables on the enormous problems that need to be addressed to make the world a better and safer place. This week's topic was “Unleashing Innovation Through Collaboration,” framed around the COVID-19 response.

On the panel were Susan Athey, the Economics of Technology Professor at Stanford; Antony Walker, Deputy Chief Executive Officer of techUK; Martina Szabo, Business Development and Strategy Lead for the World Economic Forum; and Andy Purdy, CSO of Huawei Technologies USA. The panel was moderated by Simon Cox, Emerging Markets Editor for The Economist (and he did an excellent job moderating).

Early in the year, when it first became clear we faced a pandemic, technology companies worldwide stepped up and pledged resources that should have helped find the best ways to balance economic impact and safety. But while the response was faster than the response to the 1918 influenza pandemic, it wasn't that much faster, and many mistakes should have been avoidable given our massive modeling capability. The heart of the problem wasn't a lack of data; it was the inability to reach that data and analyze it in a timely manner.

Let’s talk about what went wrong and what companies and governments should be doing to speed up the response, so the next outbreak isn’t as catastrophic.

Lack Of Simulation-Based Planning

When you have a massive health crisis, the ability to access and analyze data at a global scale is critical to identifying remedies, anticipating collateral damage from those remedies, and developing manufacturing and distribution methods. Had that analysis been done quickly, the world's governments would have realized that in addition to looking for remedies and vaccine formulations, they also needed to massively increase manufacturing capacity and work out delivery and administration logistics.

According to the panel, this planning wasn't done in a timely manner, and current estimates suggest we won't have enough manufacturing capacity to vaccinate the world for another 18 months. Nor do we have enough people to administer the vaccine, even if we could get it manufactured.

We can do simulations on a world scale; NVIDIA has showcased simulations at a galaxy scale. Still, we didn't run those simulations, so the manufacturing and logistics problems weren't identified until far too close to the vaccines' discovery, resulting in a considerable number of avoidable deaths.
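The kind of simulation-based planning the panel calls for can be illustrated in miniature with a classic SIR (Susceptible-Infected-Recovered) epidemic model. This is a sketch only; the population size and transmission parameters below are illustrative assumptions, not fitted estimates.

```python
# Minimal discrete-time SIR epidemic simulation.
# All parameter values are illustrative assumptions, not fitted to real data.

def simulate_sir(population, initial_infected, beta, gamma, days):
    """Run a simple SIR model and return the peak number of simultaneous
    infections and the day it occurs - exactly the kind of number planners
    need to size hospital, manufacturing, and staffing capacity."""
    s = population - initial_infected  # susceptible
    i = float(initial_infected)        # infected
    r = 0.0                            # recovered
    peak_infected, peak_day = i, 0
    for day in range(1, days + 1):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        if i > peak_infected:
            peak_infected, peak_day = i, day
    return peak_infected, peak_day

# Hypothetical scenario: 1M people, R0 ~ 2.5 (beta=0.25, gamma=0.1).
peak, day = simulate_sir(1_000_000, 10, beta=0.25, gamma=0.1, days=365)
print(f"Peak: ~{peak:,.0f} simultaneously infected, on day {day}")
```

Even a toy model like this makes the panel's point: the outputs that drive capacity planning (peak load and its timing) are available long before a vaccine exists, provided the input data can be reached.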

Data Friction

Even before sweeping privacy regulations like GDPR, we had massive problems getting access to large amounts of patient data. With GDPR, it has gotten worse: scientists must drill through additional bureaucracy to get the access needed to determine remedies and cures for any disease, including pandemics.

Granted, any mass access to data would need to anonymize the individuals involved. But without access to the data, determinations about things like viable medications and side effects cannot be made in a timely manner. Answers that should take seconds with today's capable AIs and analysis tools can take months or years because of data incompatibilities and restrictions against even safe access.
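One common (if partial) approach to the anonymization requirement is salted pseudonymization: replacing direct identifiers with keyed hashes so records can still be linked across datasets without revealing who they belong to. A minimal sketch, with hypothetical field names:

```python
import hashlib
import hmac

# Secret key held only by the data custodian; never shared with analysts.
# (Illustrative value - a real key would be randomly generated and vaulted.)
SECRET_KEY = b"example-key-do-not-use-in-production"

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).
    The same input always yields the same token, so records can be
    joined across datasets, but the token cannot be reversed without
    the secret key."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": "SSN-123-45-6789", "diagnosis": "J12.82"}
safe_record = {**record, "patient_id": pseudonymize(record["patient_id"])}
```

Pseudonymization alone is not full anonymization (combinations of quasi-identifiers like birth date and ZIP code can still re-identify people), which is why stronger techniques such as k-anonymity and differential privacy also exist; the point is that well-understood tools for safe access do exist, and the friction is largely procedural.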

This problem is being worked on with great intensity. For instance, the World Bank has freed up billions of dollars to help countries provide more consistent access to data during world crises like this pandemic. But other concerns remain, such as the worry that one country's investment in manufacturing abroad could be nationalized by the country doing the manufacturing.

Another complaint is that in places like the US, a bad actor can get easier access to data merely by obtaining a person's Social Security number and birth date (neither of which is that secure), while a legitimate effort to counter a pandemic must go through a much longer and more complicated process. That may partially explain why foreign countries were believed to be stealing data: they didn't have time during a pandemic to endure the excessive friction of proper channels.

Yet at the same time, we need to provide better and more comprehensive data access, and we need to secure that data so that it is no longer easier to steal than to acquire legally.

Wrapping Up

Given how much we've advanced in data analytics over the last century, it is surprising how long it took us to create a vaccine, and that we still can't manufacture and distribute it in a timely manner. Some current projections have the US losing over 20K people a week by the end of the year.

Before the next world catastrophe arrives, it will be critical to fix the data-access issues that currently prevent us from analyzing at a global level. We also need to learn to model the response, so that we build capacity and fund healthcare systems that allow the world to get a cure and apply it as rapidly as possible, minimizing the resulting deaths.

As we move to AI, we have to realize that the capabilities of these AIs will be limited by errors in their creation, the bias in the data, and a lack of data access. Addressing the first is part of any credible development effort today; leaders like IBM are working to address the second, but data access is outside the control of tech companies and needs to be addressed by governments. Some of that is going on, but not nearly enough, and if that isn’t corrected, preventing the next pandemic may remain impossible.

Thanks to Huawei for putting these events on, as they help provide a framework for assuring a better future for all of us.
