Learn how the Semantic Web is changing the way we treat data at the LinkedData Planet Conference. Sir Tim Berners-Lee, inventor of the World Wide Web and director of the W3C, is among the event's keynote speakers.
One expert put it this way: "Technically, people used to make strong distinctions between unstructured data in free text, and structured data that was digested and put into a database that people could use," says Dr. William Cohen, associate research professor at Carnegie Mellon University's Machine Learning Department. He'll be speaking on using machine learning to discover and understand structured and unstructured data at the LinkedData Planet Conference, June 17-18 in New York.
For the semantic web's capabilities to be realized, it needs machine learning to make the connections among these pieces of information, in whatever format and from whatever source, on a large scale. Consider, for example, a large organization that is the product of many acquisitions over the years, where different sub-organizations have different relationships with the same customer, expressed in different formats. It's a lot of work, and technically hard, to try to understand that customer in the context of the whole organization through traditional rules-engineering approaches, and many of these knowledge-engineering approaches fall down as data sources grow larger and more diverse.
"The way it's done today, it's labor-intensive and costly. The goal is to do it better, faster, and cheaper, and on a broader scale," Cohen says.
Machine learning is about figuring out what the rules ought to be -- for example, rules that put data from two different stores into a unified format, where one store may put the customer's name first and the other the product you sell to it. Usually the complexities aren't so easily resolved, so writing a rule that makes two entries look exactly the same, so they can go into a database with a consistent set of keys and a consistent user experience, can be time-consuming and difficult. Most likely, however, there is some sort of tag, or metadata, that conclusively identifies an item, like a SKU.
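To make the contrast concrete, here is a minimal sketch in Python of the hand-written-rule approach Cohen describes, using two purely hypothetical store layouts and field names (nothing here is a real system):

    # Hypothetical records from two stores: same facts, different layouts.
    store_a_record = {"customer": "Acme Corp", "product": "Widget", "sku": "SKU-1001"}
    store_b_record = {"item_sold": "Widget", "buyer": "Acme Corp", "sku": "SKU-1001"}

    def normalize_store_a(rec):
        # Hand-written rule for store A's layout.
        return {"sku": rec["sku"], "customer": rec["customer"], "product": rec["product"]}

    def normalize_store_b(rec):
        # Hand-written rule for store B's layout.
        return {"sku": rec["sku"], "customer": rec["buyer"], "product": rec["item_sold"]}

    # Both entries now share one schema, keyed by SKU.
    unified = [normalize_store_a(store_a_record), normalize_store_b(store_b_record)]

Writing and maintaining one such rule per data source is exactly the labor-intensive work Cohen says machine learning should take over.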
"So you can look at those IDs and say these objects are probably the same because they have this consistent ID, and from those you can figure out what the mapping ought to be," Cohen says. A person could do this, but the key is to get the machine to do the same thing: to automatically figure out what the rules ought to be for all 100 companies you deal with, with no process that involves human labor. If you can do that, it's a huge win.
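A rough sketch of that idea, again in Python and with the same hypothetical layouts: records that share a SKU are assumed to describe the same object, and the field mapping between the two sources is inferred by counting which fields carry matching values. This is only an illustration of the principle, not Cohen's actual method:

    from collections import Counter

    def infer_field_mapping(records_a, records_b, id_field="sku"):
        # Pair up records from the two stores by their shared SKU,
        # then vote on which field in B corresponds to which field in A.
        by_id = {r[id_field]: r for r in records_b}
        votes = Counter()
        for a in records_a:
            b = by_id.get(a[id_field])
            if b is None:
                continue
            for fa, va in a.items():
                for fb, vb in b.items():
                    if fa != id_field and fb != id_field and va == vb:
                        votes[(fa, fb)] += 1
        # Keep the best-supported B field for each A field.
        mapping = {}
        for (fa, fb), count in votes.most_common():
            if fa not in mapping and fb not in mapping.values():
                mapping[fa] = fb
        return mapping

    records_a = [{"sku": "SKU-1001", "customer": "Acme Corp", "product": "Widget"}]
    records_b = [{"sku": "SKU-1001", "buyer": "Acme Corp", "item_sold": "Widget"}]
    print(infer_field_mapping(records_a, records_b))
    # {'customer': 'buyer', 'product': 'item_sold'}

Because the mapping is inferred from data rather than specified by hand, some of the learned rules can be wrong, which leads to the complications Cohen describes next.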
That approach is not without complications, however, especially in its implications for the infrastructure.
"If you use machine learning to construct these rules, that forces you to come to grips with the fact that some rules, because they are learned from data, will be inaccurate," says Cohen. That could have consequences across the entire business cycle -- ordering, billing, supplying. And scalability has to be a much greater consideration. "There is a lot of work on things that work well for 10,000 data points, but we're ten years off from having them work on 100 million data points," he says -- and ten years from now we'll probably be closer to 10 billion data points anyway. "The amount of data is growing very quickly, so for the technologies we work on, we have to really understand their scalability."