Predictive analytics uses a large and highly varied arsenal of techniques to help organizations forecast outcomes, techniques that continue to develop with the widening adoption of big data analytics. Predictive analytics examples include technologies like neural networking, machine learning, text analysis, and deep learning and artificial intelligence.
Today’s trends in predictive analytics mirror established Big Data trends. Indeed, there is little true difference between Big Data analytics tools and the software tools used in predictive analytics. In short, predictive analytics technologies are closely related to (if not identical to) Big Data technologies.
With varying degrees of success, predictive analytics techniques are being used to assess a person’s creditworthiness, revamp marketing campaigns, predict the contents of text documents, forecast weather, and develop safe self-driving cars.
Predictive Analytics Definition
Predictive analytics is the art and science of creating predictive systems and models. These models, with tuning over time, can then predict outcomes with far greater accuracy than mere guesswork.
Often, though, predictive analytics is used as an umbrella term that also embraces related types of advanced analytics. These include descriptive analytics, which provides insights into what has happened in the past; and prescriptive analytics, used to improve the effectiveness of decisions about what to do in the future.
Starting the Predictive Analytics Modeling Process
Each predictive analytics model is composed of several predictors, or variables, that will impact the probability of various results. Before launching a predictive modeling process, it’s important to identify the business objectives, scope of the project, expected outcomes, and data sets to be used.
Data Collection and Mining
Prior to the development of predictive analytics models, data mining is typically performed to help determine which variables and patterns to consider in building the model.
Prior to that, relevant data is collected and cleaned. Data from multiple sources may be combined into a common source. Data relevant to the analysis is selected, retrieved, and transformed into forms that will work with data mining procedures.
Techniques drawn from statistics, artificial intelligence (AI) and machine learning (ML) are applied in the data mining processes that follow.
AI systems, of course, are designed to think like humans. ML systems push AI to new heights by giving computers the ability to “learn without being explicitly programmed,” as renowned computer scientist Arthur Samuel put it in 1959.
Classification and clustering are two ML methods commonly used in data mining. Other data mining techniques include generalization, characterization, pattern matching, data visualization, evolution, and meta rule-guided mining, for example. Data mining methods can be run on either a supervised or unsupervised basis.
- Also referred to as supervised classification, classification uses class labels to place the objects in a data set in order. Generally, classification begins with a training set of objects which are already associated with known class labels. The classification algorithm learns from the training set to classify new objects. For example, a store might use classification to analyze customers’ credit histories to label customers according to risk and later build a predictive analytics model for either accepting or rejecting future credit requests.
- Clustering, on the other hand, calls for placing data into related groups, usually without advance knowledge of the group definitions, sometimes yielding results surprising to humans. A clustering algorithm assigns data points to groups so that points within a group are more similar to one another than to points in other groups. A department store chain in Illinois, for example, used clustering to look at a sale of men’s suits. Reportedly, every store in the chain except one experienced a revenue boost of at least 100 percent during the sale. As it turned out, the store that didn’t enjoy those revenue gains relied on radio ads rather than TV commercials.
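Both approaches can be sketched in a few lines of plain Python. The toy credit-history data, the 1-nearest-neighbor classifier, and the one-dimensional k-means routine below are illustrative stand-ins for the production-grade algorithms a real system would use:

```python
import math

# --- Supervised classification: 1-nearest-neighbor on labeled credit histories ---
# Training set: (late_payments, debt_ratio) -> risk label (illustrative data)
training = [
    ((0, 0.10), "low"),
    ((1, 0.25), "low"),
    ((4, 0.60), "high"),
    ((6, 0.80), "high"),
]

def classify(point):
    """Label a new applicant with the label of the closest training example."""
    return min(training, key=lambda t: math.dist(point, t[0]))[1]

# --- Unsupervised clustering: 1-D k-means, with no labels known in advance ---
def kmeans_1d(values, k=2, iters=20):
    centers = sorted(values)[:k]  # naive initialization
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            groups[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        centers = [sum(g) / len(g) if g else centers[i] for i, g in enumerate(groups)]
    return centers, groups

print(classify((5, 0.7)))  # nearest labeled example is high-risk
print(kmeans_1d([1.0, 1.2, 0.9, 8.0, 8.4, 7.9])[0])  # two cluster centers emerge
```

The contrast is the point: `classify` needs the labels up front, while `kmeans_1d` discovers the two groups on its own.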
The next stage in predictive analytics modeling involves the application of additional statistical methods and/or structural techniques to help develop the model. Data scientists often build multiple predictive analytics models and then select the best one based on its performance.
After a predictive model is chosen, it is deployed into everyday use, monitored to make sure it’s providing the expected results, and revised as required.
Different predictive analytics techniques are best suited to analyzing different types of data:

| Application | Technique | Type(s) of data analyzed |
| --- | --- | --- |
| Approving or denying loans | Decision tree | Quantitative and qualitative |
| Rating customers’ credit | Multiple linear regression | Quantitative |
| Predicting topics of documents | Topic modeling | Text |
| Figuring out public opinion | Sentiment analysis | Text |
| Forecasting the weather | Math calculations/neural nets | Quantitative time series |
| Decisions by self-driving cars | Deep learning/neural nets | Images |
List of Predictive Analytics Techniques
Some predictive analytics techniques, such as decision trees, can be used with both numerical and non-numerical data, while others, such as multiple linear regression, are designed for quantified data. As its name implies, text analysis is designed strictly for analyzing text.
Decision tree techniques, also based on ML, use classification algorithms from data mining to determine the possible risks and rewards of pursuing several different courses of action. Potential outcomes are then presented as a flowchart which helps humans to visualize the data through a tree-like structure.
- A decision tree has three major parts: a root node, which is the starting point, along with internal nodes, leaf nodes, and branches. The root and internal nodes ask questions about the data; each leaf node holds a final outcome.
- The branches connect the nodes, depicting the flow from questions to answers. Generally, each question node has multiple branches extending from it, one for each possible answer. The answers can be as simple as “yes” and “no.”
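This structure can be made concrete with a small sketch: the nested dictionary below encodes a hypothetical loan-approval tree, where question nodes hold a test, branches map answers to child nodes, and leaves hold outcomes. The features, thresholds, and outcomes are invented for illustration:

```python
# A toy decision tree for loan approval: question nodes hold a test,
# the "yes"/"no" branches lead to child nodes, and leaf nodes (plain
# strings) hold the final outcome. All thresholds are illustrative.
tree = {
    "question": lambda a: a["credit_score"] >= 650,
    "yes": {
        "question": lambda a: a["debt_ratio"] < 0.4,
        "yes": "approve",
        "no": "manual review",
    },
    "no": "deny",
}

def decide(node, applicant):
    """Walk from the root down the branches until a leaf (a string) is reached."""
    while isinstance(node, dict):
        node = node["yes"] if node["question"](applicant) else node["no"]
    return node

print(decide(tree, {"credit_score": 700, "debt_ratio": 0.2}))  # approve
print(decide(tree, {"credit_score": 600, "debt_ratio": 0.2}))  # deny
```

In a real system the questions and thresholds would be learned from training data rather than hand-written, but the traversal from root to leaf works the same way.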
Much enterprise data is still stored neatly in easily queryable relational database management systems (RDBMS). However, the big data boom has ushered in an explosion in the availability of unstructured and semi-structured data from sources such as emails, social media, web pages, and call center logs.
To find answers in this text data, organizations are now experimenting with new advanced analytics techniques such as topic modeling and sentiment analysis. Text analytics uses ML, statistical, and linguistics techniques.
- Topic modeling is already proving itself to be very effective at examining large clusters of text to determine the probability that specific topics are covered in a specific document.
- To predict the topics of a given document, it examines words used in the document. For instance, words such as hospital, doctor, and patient would result in “healthcare.” A law firm might use topic modeling, for instance, to find case law pertaining to a specific subject.
- One predictive analytics technique leveraged in topic modeling, probabilistic latent semantic indexing (PLSI), uses probability to model co-occurrence data, a term referring to an above-chance frequency of occurrence of two terms next to each other in a certain order.
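As a rough illustration of the idea, the sketch below scores a document against hand-built topic vocabularies. This is far simpler than PLSI, which learns topics from co-occurrence statistics rather than using fixed word lists; the topics and word sets here are assumptions made for the example:

```python
# A drastically simplified stand-in for topic modeling: instead of learning
# topics from a corpus (as PLSI would), it scores a document against
# hand-built topic word lists. Topics and vocabularies are illustrative.
topics = {
    "healthcare": {"hospital", "doctor", "patient", "treatment"},
    "law": {"court", "judge", "statute", "plaintiff"},
}

def likely_topic(document):
    """Return the topic whose vocabulary overlaps most with the document."""
    words = set(document.lower().split())
    scores = {t: len(words & vocab) for t, vocab in topics.items()}
    return max(scores, key=scores.get)

print(likely_topic("the doctor admitted the patient to the hospital"))  # healthcare
```

A real topic model infers these word distributions, and the topics themselves, from the corpus.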
Sentiment analysis, also known as opinion mining, is an advanced analytics technique still in earlier phases of development.
- Through sentiment analysis, data scientists seek to identify and categorize people’s feelings and opinions. Reactions expressed in social media, Amazon product reviews, and other pieces of text can be analyzed to assess and make decisions about attitudes toward a specific product, company, or brand. Through sentiment analysis, for example, Expedia Canada decided to fix a marketing campaign featuring a screeching violin that consumers were complaining about loudly online.
- One technique used in sentiment analysis, dubbed polarity analysis, tells whether the tone of the text is negative or positive. Categorization can then be used to home in further on the writer’s attitude and emotions. Finally, a person’s emotions can be placed on a scale, with 0 meaning “sad” and 10 signifying “happy.”
- Sentiment analysis, though, has its limits. According to Matthew Russell, CTO at Digital Reasoning and principal at Zaffra, it’s critical to use a large and relevant data sample when measuring sentiment. That’s because sentiment is inherently subjective as well as likely to change over time due to factors running the gamut from a consumer’s mood that day to the impacts of world events.
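A minimal lexicon-based polarity scorer shows the core idea. Real systems use large sentiment lexicons or trained models; the word lists below are invented for the example:

```python
# A minimal lexicon-based polarity scorer: count positive and negative
# words and compare. The word lists are tiny and illustrative; production
# systems use large lexicons or trained classifiers.
POSITIVE = {"great", "love", "excellent", "happy"}
NEGATIVE = {"awful", "hate", "terrible", "screeching"}

def polarity(text):
    """Classify text as positive, negative, or neutral by word counts."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(polarity("I hate that screeching violin"))  # negative
print(polarity("great ad love it"))               # positive
```

The limits noted above show up immediately in a scorer like this: sarcasm, negation ("not great"), and context all defeat simple word counting.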
Simple Statistical Modeling
Statistical techniques in predictive analytics modeling can range all the way from simple traditional mathematical equations to complex deep machine learning processes running on sophisticated neural networks. Multiple linear regression is the most commonly used simple statistical method.
- In predictive analytics modeling, multiple linear regression models the relationship between two or more independent variables and one continuous dependent variable by fitting a linear equation to observed data.
- Each value of the independent variable x is associated with a value of the dependent variable y. Let’s say, for example, that data analysts want to answer the question of whether age and IQ scores effectively predict grade point average (GPA). In this case, GPA is the dependent variable and the independent variables are age and IQ scores.
- Multiple linear regression can be used to build models which either identify the strength of the effect of independent variables on the dependent variable, predict future trends, or forecast the impact of changes. For instance, a predictive analytics model could be built which forecasts the amount by which GPA is expected to increase (or decrease) for every one-point increase (or decrease) in intelligence quotient.
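The GPA example can be worked end to end with ordinary least squares. The sketch below solves the normal equations (XᵀX)b = Xᵀy with plain Gaussian elimination; the data points are synthetic, generated from a known linear rule (GPA = 1.0 + 0.02·age + 0.01·IQ) so the recovered coefficients can be checked:

```python
# Fitting GPA ~ age + IQ by ordinary least squares. The data is synthetic,
# generated from GPA = 1.0 + 0.02*age + 0.01*IQ, so the fitted coefficients
# should recover those values.
data = [  # (age, iq, gpa)
    (18, 100, 2.36), (19, 110, 2.48), (20, 105, 2.45),
    (21, 120, 2.62), (22, 115, 2.59), (23, 125, 2.71),
]

X = [[1.0, age, iq] for age, iq, _ in data]  # leading 1s give the intercept
y = [gpa for _, _, gpa in data]

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Normal equations: (X^T X) b = X^T y
XtX = [[sum(row[i] * row[j] for row in X) for j in range(3)] for i in range(3)]
Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(3)]
intercept, b_age, b_iq = solve(XtX, Xty)

print(round(b_iq, 3))  # 0.01 -- expected GPA change per one-point IQ increase
```

In practice a library routine such as `numpy.linalg.lstsq` would replace the hand-rolled solver, and real data would of course carry noise, so the coefficients would come with confidence intervals rather than recovering the generating rule exactly.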
However, traditional predictive analytics techniques like multiple linear regression aren’t always good at handling big data. For instance, big data analysis often requires an understanding of the sequence or timing of events. Neural networking techniques are much more adept at dealing with sequence and internal time orderings. Neural networks can make better predictions on time series information like weather data, for instance. Yet although neural networking excels at some types of statistical analysis, its applications range much further than that.
In a recent study by TDWI, respondents were asked to name the most useful applications of Hadoop if their companies were to implement it. Each respondent was allowed up to four responses. A total of 36 percent named a “queryable archive for nontraditional data,” while 33 percent chose a “computational platform and sandbox for advanced analytics.” In comparison, 46 percent named “warehouse extensions.” Also showing up on the list was “archiving traditional data,” at 19 percent.
- For its part, nontraditional data extends way beyond text data such as social media posts and emails. For data input such as maps, audio, video, and medical images, deep learning techniques are also required. These techniques create layer upon layer of neural networks to analyze complex data shapes and patterns, improving their accuracy rates by being trained on representative data sets.
- Deep learning techniques are already used in pattern recognition applications such as voice and facial recognition and in predictive analytics techniques based on those methods. For instance, to monitor viewers’ reactions to TV show trailers and decide which TV programs to run in various world markets, BBC Worldwide has developed an emotion detection application. The application leverages an offshoot of facial recognition called face tracking, which analyzes facial movements. The point is to predict the emotions that viewers would experience when watching the actual TV shows.
The (Future) Brains Behind Self-Driving Cars
Much research is now focused on self-driving cars, another deep learning application which uses predictive analytics and other types of advanced analytics. For instance, to be safe enough to drive on a real roadway, autonomous vehicles need to predict when to slow down or stop because a passenger is about to cross the street.
Beyond issues related to the development of adequate machine vision cameras, building and training neural networks which can produce the needed degree of accuracy presents a set of unique challenges.
- Clearly, a representative data set would have to include an adequate amount of driving, weather, and simulation patterns. This data has yet to be collected, however, partly due to the expense of the endeavor, according to Carl Gutierrez of consultancy and professional services company Altoros.
- Other barriers include the complexity and computational power of today’s neural networks. A network needs either enough parameters or a sufficiently sophisticated architecture to train on, learn from, and retain lessons learned in autonomous vehicle applications. Scaling the data set to a massive size poses additional engineering challenges.