The AI Index is an in-depth annual report on the current state of AI. It covers autonomous systems, research and development, the AI economy, public perception of AI, and, perhaps most importantly, the technical performance of artificial intelligence. Published by Stanford University, it is arguably the most complete report on the state of AI today.
To get insight into this recently published report, I spoke with Jack Clark, policy director at OpenAI and a member of the steering committee for the AI Index.
Edited Highlights:
AI Development is Now Global
Clark: “What surprised me was just how equal various international players are. If you look at the US and you look at China, they are much closer to peers as AI research entities. And I think that’s surprising; many people assume the US has some kind of massive lead that differentiates it from everywhere else in the world. If you drill down, you discover that there’s actually just a bunch of peer nations, or sets of nations, with equal capabilities.
“It means that this is not just an academic exercise, and it’s not just a thing being pursued by a small number of narrow commercial interests either. It’s being pursued by a global plurality of organizations and groups, which means the effects are going to be weird.
“But with all of the societal changes that are making politicians realize that AI is going to be a big deal, the question is: what happens when they realize it’s a big deal and want to do something about it? I don’t think anyone’s quite sure of that.”
China vs. US in Artificial Intelligence
Clark: “It’s a significant competition. And I’m going to use this question to complain about something.
“I go to Washington DC once every five or six weeks. I go there a lot, and I’ve been doing that for two years, because I care a lot about what the US does with regard to AI policy, because it will affect the world. And the line I hear in Washington DC is that the Chinese can’t invent things, they just steal things.
“And, I’m sitting there as someone who reads these research papers and I’m like, ‘No, that’s very wrong.’
“What we’re seeing with Chinese papers is that they’re reaching to become a peer on publications, but the quality of their papers is also increasing. It’s still a bit lower in quality than the overall US and European ecosystems, as analysis from Elsevier shows, but it’s edging up there as well.
“When you look at the amounts of money being invested, both the US and China seem to be on par in terms of aggressive investments, way ahead of other countries. And I do think that these two countries will lead a lot of things happening in AI purely due to their scale.”
AI Conference Attendance Up Sharply
Clark: “[A few years back] was a time where you could feel that the money [in AI] had arrived and it was starting to take off. And I think the reason why it’s continued is that this technology actually works in some areas: home security cameras, perceiving weather via satellite. There are lots of economic cases where AI has begun to mean something, and so the attendance is going to continue to grow.
“I think what this tells us is that AI is becoming integrated into the economy and we can see attendance of these conferences as a proxy for that.”
Growing Interest in AI
Clark: “We measure Google search interest for cloud, big data, and all of this stuff, and we do the same with corporate conference calls. And what we see is that businesses are mentioning AI more than cloud computing and big data. That can seem like, ‘Oh, yeah, well, that’s just sort of [expletive] signaling on the part of businesses.’
“But actually, as you remember, when companies were telling us they were investing in big data or in cloud, they were standing up huge teams of people to actually do stuff. And even if it didn’t lead to exactly the benefits they thought they’d get, it changed a lot of these companies, and it also changed the technology infrastructure we work with today.
“And so if this happens with AI? Yeah, a lot of money is gonna get wasted, but some of it’s going to do really wild things.”
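The mention-counting Clark describes is straightforward to picture. Below is a hypothetical sketch of that style of proxy measurement; the term list and sample transcript are invented for illustration and are not the AI Index’s actual pipeline:

```python
import re

# Illustrative list of technology terms to track across call transcripts.
TERMS = ["artificial intelligence", "machine learning",
         "cloud computing", "big data"]

def count_mentions(transcript: str) -> dict:
    """Return case-insensitive mention counts for each tracked term."""
    text = transcript.lower()
    return {term: len(re.findall(re.escape(term), text)) for term in TERMS}

# Invented snippet standing in for a real earnings-call transcript.
sample = ("This quarter we doubled our investment in machine learning. "
          "Our machine learning models now run on cloud computing "
          "infrastructure instead of legacy big data clusters.")

print(count_mentions(sample))
# {'artificial intelligence': 0, 'machine learning': 2,
#  'cloud computing': 1, 'big data': 1}
```

Aggregating counts like these over thousands of transcripts per quarter would give trend lines of the kind the Index reports.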
The Stunning Recent Advances in AI
The AI Index report states that:
- In a year and a half, the time required to train a network on cloud infrastructure has fallen from about three hours in October 2017 to about 88 seconds in July 2019.
- The amount of computation used in the largest AI training runs has doubled every 3.4 months since 2012 (see the quick calculation below).
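To put both figures in perspective, here is a back-of-the-envelope calculation. Only the three-hour, 88-second, and 3.4-month numbers come from the report; the derived factors are plain arithmetic:

```python
# Training-time trend: ~3 hours (Oct 2017) down to ~88 seconds (Jul 2019).
speedup = (3 * 3600) / 88
print(f"Training speedup: ~{speedup:.0f}x in under two years")  # ~123x

# Compute trend: a doubling every 3.4 months implies this annual growth.
doublings_per_year = 12 / 3.4
yearly_growth = 2 ** doublings_per_year
print(f"Compute growth: ~{yearly_growth:.1f}x per year "
      f"({doublings_per_year:.1f} doublings per year)")  # ~11.5x per year
```

For contrast, the classic Moore’s Law cadence of a doubling every two years works out to roughly 1.4x per year.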
Clark: “It’s weird, this is all extremely weird. I say this in my presentations. A doubling rate of 3.4 months on computation is bloody strange. [The past] 18 to 24 months has changed all of civilization; the implications of this are huge.
“Part of this is algorithmic improvements which save on training efficiency. What’s actually happened is we’re seeing the integration of new research insights into a well-understood, economically useful task, like image recognition, to bring that time down. That’s amazing. That’s not frontier research; that is the integration of existing research into an economically meaningful AI task.
“And then on the other side, you have these growing computational expenditures. Those are more about running experiments at the largest possible scale to understand how these things behave at that scale, because within a few years, that scale will be really cheap.
“In the same way that this thing costs less over time to run on a cloud, or that time to train will improve, that’s going to happen with the big [AI] stuff. And so you want to train the largest possible systems you can now, because you’re going to have the best possible intuition about the longest part of the future then.”
Advances in AI Hardware
Clark: “What we’ve observed is that the usability of GPUs is really, really hard to beat. You might have hardware that has better specific performance traits, and if you’re a big business, or a start-up running a very specific use case, the economics might work out such that using different hardware saves on your costs.
“But because we’re a research lab, we kind of want to do everything. So for us, that biases us mostly toward commodity infrastructure like GPUs, because it’s got the greatest level of flexibility. And I think that this trade-off is going to be difficult.”
AI and the Job Market
Clark: “I think that AI leads to massive technological change. I think if you look at history, technological change usually leads to revolution, war, or economic collapse. And so I think that the effects of AI are going to be extremely profound and I think the effect on the workforce could be extremely messy.
“I think what we need to do, and what inspires me to work on this is, I have an almost naive belief that if we get better data in front of policymakers and call for the creation of original data on things like AI and employment, we can probably deal with this disruption better.
“Right now we’re kind of blind. The reason there isn’t more data in the index itself is that getting this data is very hard. It requires you to do decent censuses at the level of individual industries in your country. The US is starting to do stuff here, but we don’t have the ability to generate the data that would let us easily say whether there was a problem or not, and that’s a big problem.
“So I’m kind of saying to policymakers: let’s accept that there’s a change, and then let’s point at certain job classes which are going to grow massively, and which we can actually train people into and make them good at. I think that is a choice we have.”
AI and Ethics
Clark: “I’m trying to think of something which walks the tightrope between being striking and being scaremongering, so let me come up with the right example.
“I’ve got a good example. In 2015, a Canadian start-up announced via a press release that it had successfully synthesized horsepox, and it announced this because it wanted to advertise the decent, sort of sophisticated chemical analysis and synthesis capabilities it had. Now, the problem here is that if I can make horsepox, I can also make smallpox using the same techniques, which is a bad thing. Governments have actually invested quite a lot of money in protecting the world from smallpox. We’re all happy about this, and it led to governments having a chat with that start-up.
“We’re in that position with AI today. I think most AI developers are going to try to do things for good reasons, and then they’re going to announce, ‘Oh, we’ve just released a system that can make any drone navigate to a specific face that you give it. We’ve done that for sports.’ Except, obviously, that’s not just going to get used for sports. It’s going to get used for all kinds of things.
“What we need to do is create culpability among developers. Earlier this year, OpenAI had a big text generation system called GPT-2. We made ourselves culpable for it by saying at the start, ‘We think this could have weird policy effects.’ We slowly released it through the year while publishing research on disinformation, on detection, and on the threats of GPT-2, and we did that because we wanted to show that, as an AI developer, we can also try to think about this stuff.
“Now, thinking [by itself] is insufficient for the scale of the challenge AI poses to society. But you need to start somewhere, and I think that’s where we need to start today. The AI community needs to have real conversations about the impact of what it’s doing, and if we have more of that, it might lead to ethics that become more restrictive in a beneficial way.”