Tuesday, March 19, 2024

Artificial Intelligence: Perception vs. Reality


Given the hype surrounding artificial intelligence, businesses are exceptionally eager to deploy this emerging technology. Executives fear that if they don’t deploy AI, they will fall behind their competitors. Yet the reality of AI and ML is that they are complex, expensive, and often confusing. Plenty of AI projects end up as “expensive science projects” that businesses spend lavishly on, only to end in disappointment.

In this webinar, we discussed:

1) Where is the market in terms of real adoption of ML and AI?

2) What data issues do executives need to solve – or at least understand – before considering ML? What’s the advice for getting started?

3) Challenges: What are the unstated assumptions that trip projects up?

4) How does miscommunication contribute to AI project stumbles?

5) Ethics and AI: How should executives be thinking about this?

To provide insight into the contradictions in AI, I spoke with two leading experts:

Ya Xue, VP of Data Science, Infinia ML

James Kotecki, VP of Marketing and Communications, Infinia ML

Moderator: James Maguire, Managing Editor, Datamation

Download the podcast:

Is AI Really Mainstream? 

Top Quotes:

Xue: Oh, I think companies actually are using it. There are some well-known examples like Tesla’s self-driving, Google search, Apple’s Siri, and other famous applications. And even among less famous examples – take our company, Infinia: we’ve been in business for almost three years, and we’ve done, I would say, over 30 different projects across different application areas and different companies. So we’ve seen that a lot of companies really are using AI as a powerful tool to reduce their costs, improve business efficiency, and create real business value.

Kotecki: But there’s also some nuance to this: what does it mean to do AI, and how do people define what it means to actually do it? There’s some truth to that 4% metric as well – the numbers differ, but it’s always some very low, paltry share of people that are actually doing AI and getting real business value from it. I think there are probably a lot more people who are dabbling, who say they’re doing AI, who throw the label of AI on it, because – as a marketer, I can say this from a marketing perspective – it’s better to say you’re doing something sexy in AI than maybe something else. It’s just the term of the moment.

Kotecki: And there are also people doing a lot of things in AI who haven’t figured out how to operationalize it, how to productionize it, how to deploy it in a widespread way that’s actually delivering day-to-day value. It’s not just creating a cool algorithm, which we can certainly do. It’s getting all the way to the people in your organization actually getting use from it, which involves a lot of AI work and then a lot of steps after that – a lot of change management that’s not as easy as writing an algorithm. So it’s a long process that we’re in the middle of now, a long transformation.

The Gap Between AI Experts/Developers and the Executives Who Manage Them

Top Quotes:

Xue: Oh, yes. There are many gaps, actually. The first is the illusion about AI. Some people have the misunderstanding that AI is just a miracle: you download some off-the-shelf software, throw the data in, and get the results you want. The part people don’t get is that AI is pretty much like any other kind of research and development – it takes time and effort. So that’s one thing: setting the right expectation is very important. Another thing executives don’t understand is the consequence of AI success. Yes, it’s a success, and you can make a prediction at 99% accuracy. However, when you deploy it, it’s going to change your workflow and your business process. Are you ready for that? That’s the bigger question. It has happened multiple times: we develop something, and finally, when it’s time to put it into production – oops, we may not be able to do it.

Kotecki: I think there’s a related factor here. I’ve gotten a sense from several past clients – and I think it’s just in the zeitgeist – that executives would not say this publicly, but a lot of the reason they might want to use AI or ML is to replace people, right? Reduce headcount. And they go into projects thinking that’s what they’re going to do. Oftentimes, in our narrow corner of the universe, we have seen that the executives who think that are then shown they still need those people to do what Ya is talking about – to do those kinds of reviews – or they need to reassign those people to higher-level projects.

Ethics and AI

Top Quotes:

Xue: I think the first thing executives have to recognize is that there’s a potential risk, because it happens. Bias is introduced either in the design or in the data – mostly from the data, I would say. Machine learning models need to be trained with large amounts of data, and if that data under-represents or over-represents a certain group – say a gender group or a racial group – then the machine learning algorithm will learn that, embed that information into the model, and produce biased results. It’s a well-known problem, and the machine learning community is working very hard to address it.

Kotecki: It’s certainly something to be concerned about. Even if you were a completely unethical executive yourself, you should be concerned about the headline risk. Any time you see the terms bias and AI in the headline of a major mainstream publication, it’s going to be a problem for whatever company is highlighted in that article. That said, it’s important to remember that the term bias doesn’t necessarily have a negative connotation in and of itself in a data science context. As Ya was hinting, you might want to bias an algorithm in favor of candidates who are smart, for example, in a job-screening application. So it’s not that we necessarily want no bias at all; we want to understand it, and we want to get rid of bias that is untoward.
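To make Xue’s point about data representation concrete, here is a minimal, hypothetical sketch (not from the webinar) using Python and scikit-learn. The group names, sample sizes, and feature-label rules are all illustrative assumptions: one group vastly outnumbers another whose pattern differs, so the fitted model reflects the majority group and performs poorly on the minority group.

```python
# Illustrative sketch only: how under-representation in training data can skew a model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Group A dominates the training data; group B is heavily under-represented.
n_a, n_b = 950, 50
X_a = rng.normal(size=(n_a, 1))
X_b = rng.normal(size=(n_b, 1))
y_a = (X_a[:, 0] > 0).astype(int)  # group A: positive feature -> class 1
y_b = (X_b[:, 0] < 0).astype(int)  # group B: the relationship is reversed

X = np.vstack([X_a, X_b])
y = np.concatenate([y_a, y_b])

model = LogisticRegression().fit(X, y)

# The model learns the majority group's pattern, so it scores well on group A
# and poorly on group B, even though both groups were in the training set.
print("accuracy on group A:", model.score(X_a, y_a))
print("accuracy on group B:", model.score(X_b, y_b))
```

The point of the sketch is simply that the model has no way to recover a pattern that is barely present in its training data; auditing performance per group, as above, is one way to surface that kind of skew before deployment.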
