Machine learning and the Semantic Web go hand in hand, exploring and exploiting the continuum between structured and unstructured data to connect diverse sources of knowledge on a large scale.
Learn how the Semantic Web is changing the way we treat data at the LinkedData Planet Conference. Sir Tim Berners-Lee, inventor of the World Wide Web and director of the W3C, is among the event’s keynote speakers.
“Technically, people used to make strong distinctions between unstructured data in free text and structured data that was digested and put into a database that people could use,” says Dr. William Cohen, associate research professor at Carnegie Mellon University’s Machine Learning Department. He’ll be speaking on using machine learning to discover and understand structured and unstructured data at the LinkedData Planet Conference, June 17-18 in New York.
“But there is a continuum between these. Web sites, for instance, have information with some structure — tables and lists, often derived from an underlying database but presented in a way people can understand. It’s intended for the human user, not the computer,” Cohen says.
For the Semantic Web’s capabilities to be realized, it needs machine learning to make connections among these pieces of information, in whatever format and from whatever source, on a large scale. Consider, for example, a large organization that is the product of many acquisitions over the years, where different sub-organizations have different relationships with the same customer, expressed in different formats. Understanding that customer in the context of the whole organization through traditional rules-engineering approaches is laborious and technically hard, and those knowledge-engineering approaches fall down as data sources grow larger and more diverse.
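To make that concrete, here is a minimal, hypothetical sketch of the rules-engineering approach Cohen is describing; the subsidiary names, field names, and formats below are invented, but the pattern is the point: every source needs its own hand-written normalizer, and every new acquisition or format change means more human labor.

```python
# A hypothetical sketch of hand-written integration rules (subsidiary names,
# field names, and formats are invented): each source needs its own
# normalizer, written and maintained by a person.

def normalize_from_subsidiary_a(row):
    # Subsidiary A stores names as "Lastname, Firstname" with a local account code.
    last, first = row["name"].split(", ")
    return {"customer": f"{first} {last}", "account": row["acct_no"]}

def normalize_from_subsidiary_b(row):
    # Subsidiary B stores the customer as a lowercase free-text "bill_to" line.
    return {"customer": row["bill_to"].title(), "account": row["customer_ref"]}

print(normalize_from_subsidiary_a({"name": "Vance, Ada", "acct_no": "A-77"}))
print(normalize_from_subsidiary_b({"bill_to": "ada vance", "customer_ref": "B-91"}))
# Both yield a record for the same customer, but only because a person wrote
# (and must keep maintaining) a bespoke rule for each acquired source.
```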
“The way it’s done today, it’s labor intensive and costly. The goal is to do it better, faster, and cheaper, and on a broader scale,” Cohen says.
Machine learning’s job here is to figure out what the rules ought to be: for example, rules for putting data from two different stores into a unified format, where one store may list the customer’s name first and the other the product sold to it. But usually the complexities aren’t so easily resolved: writing a rule that makes two entries look exactly the same, so they can go into a database with a consistent set of keys and a consistent user experience, can be time-consuming and difficult. However, there is most likely some sort of tag, or metadata, that conclusively identifies an item, such as a SKU.
“So you can look at those IDs and say these objects are probably the same because they have this consistent ID, and from those you can figure the mapping out to be this,” Cohen says. “A person would do this, but the key thing is to get the machine to do the same thing … to automatically figure out what the rules ought to be for all 100 companies you deal with, and there’s no process that involves human labor. If you can do that, it’s a huge win.”
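As a toy illustration of that idea (not Cohen’s actual system), the sketch below pairs records from two hypothetical stores by a shared SKU and then votes on which columns carry the same values; the store data, the field names, and the learn_field_mapping helper are all invented for the example.

```python
# A minimal sketch of learning a field mapping from records that share an ID.
# Store data and field names are hypothetical.
from collections import Counter

store_a = [  # customer name listed first
    {"sku": "A-1001", "customer": "Acme Corp", "product": "Widget"},
    {"sku": "A-1002", "customer": "Globex", "product": "Gadget"},
]
store_b = [  # different field names, product listed first
    {"item_id": "A-1001", "item": "Widget", "buyer": "Acme Corp"},
    {"item_id": "A-1002", "item": "Gadget", "buyer": "Globex"},
]

def learn_field_mapping(left, right, left_key="sku", right_key="item_id"):
    """Pair records by the shared ID, then vote on which fields agree in value."""
    right_by_id = {r[right_key]: r for r in right}
    votes = Counter()
    for l in left:
        r = right_by_id.get(l[left_key])
        if r is None:
            continue
        for lf, lv in l.items():
            if lf == left_key:
                continue
            for rf, rv in r.items():
                if rf != right_key and lv == rv:
                    votes[(lf, rf)] += 1
    # Keep the most frequently agreeing pairing for each left-hand field.
    mapping = {}
    for (lf, rf), _ in votes.most_common():
        if lf not in mapping and rf not in mapping.values():
            mapping[lf] = rf
    return mapping

print(learn_field_mapping(store_a, store_b))
# typically {'customer': 'buyer', 'product': 'item'}
```

The design choice is the one Cohen describes: the consistent ID provides the “probably the same” pairs for free, and the mapping rules are then inferred from those pairs rather than written by hand for each of the 100 companies.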
That win is not without complications, however, especially in its implications for the infrastructure.
“If you use machine learning to construct these rules, that forces you to come to grips with the fact that some rules, because they are learned from data, will be inaccurate,” says Cohen. That could have consequences across the entire business cycle: ordering, billing, supplying. Scalability also has to be a much greater consideration. “There is a lot of work on things that work well for 10,000 data points, but we’re ten years off from having them work on 100 million data points,” he says; and ten years from now we’ll probably be closer to 10 billion data points anyway. “The amount of data is growing very quickly, so the technologies we work on, we have to really understand their scalability.”
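One practical consequence, sketched below with made-up records and a made-up matching rule, is that a learned rule should be scored on a small hand-labeled holdout before it is trusted to drive ordering or billing; the error estimate, not the rule itself, is the point.

```python
# A hypothetical sketch: estimate a learned rule's error rate on a small
# hand-labeled holdout before relying on it. Records and rule are invented.

def learned_match(rec_a, rec_b):
    """Stand-in for a rule a learner might produce: match on normalized name."""
    return rec_a["customer"].strip().lower() == rec_b["buyer"].strip().lower()

# Hand-labeled pairs: (record_a, record_b, truly_the_same_customer?)
holdout = [
    ({"customer": "Acme Corp"}, {"buyer": "acme corp"}, True),
    ({"customer": "Acme Corp"}, {"buyer": "Acme Corp."}, True),   # rule misses this one
    ({"customer": "Globex"},    {"buyer": "Initech"},    False),
]

errors = sum(learned_match(a, b) != truth for a, b, truth in holdout)
print(f"estimated error rate: {errors / len(holdout):.0%}")  # -> 33%
```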