Thursday, October 3, 2024

Personalization: Coming Full Circle, Part 1


We’ve come a long way in personalization. So far, in fact, that in many ways the industry is right back to where it started.

Once a carefully defined (as well as manually intensive and very expensive) niche strategy, personalization has recently become a grab bag of uncoordinated, incompatible, and overlapping tactics aimed at general consumers.

Lured by the bold promises of one-to-one marketing and “mass customization,” many businesses embraced personalization without a strategic framework for
investment and without understanding either the value of customers over time or the value of the Web as a channel to influence their behavior.

In many cases (as with so many doomed customer relationship management, or CRM, implementations), companies have undertaken personalization initiatives
without even a management commitment to customer supremacy, organizational incentives, or infrastructure.

After so many failed — or at least disappointing — experiments with click-stream analytics, rules-based personalization, and data mining solutions, businesses are
realizing that the complexities of individual customers aren’t readily gleaned from Web server logs or huge data warehouses — no matter how fancy or
expensive the analytics.

Given the tremendous investment required to get a meaningful look at customers, businesses are now focusing on the concept of “service to value.” In essence, they
are returning to the belief that not all customers are created equal, and that some deserve and need higher levels of profiling and service delivery.

In this, the first of a two-part column, I will set the historical backdrop of personalization and discuss the first of three categories of personalization technologies and
how they’ve impacted business practices.

The Good/Bad Old Days

Once upon a time, before corporate managers believed that technology was the answer to everything, personalization was a niche strategy. This strategy involved
paying very close and careful attention to what were called “regular customers” — the biggest and best repeat buyers who generated the bulk of the revenue (the
proverbial 20 percent that provide 80 percent of the business).

So businesses devoted a lot of time, energy, and attention to these regular customers. Sales managers took it upon themselves to teach other employees about their
needs and interests. On any given day, ordinary customers would come and go, but when Ms. Regular Customer showed up at the store, everybody jumped and
focused on meeting her concerns.

Providing such special treatment was expensive, but the investment was worth it. Nobody had to perform a return on investment (ROI) or cost benefit analysis to
justify the added expense and effort. Clearly, it made good business sense.

The Advent of Database Marketing

Then technology came along, with the promise of “proactively managing consumer relationships through developing customer intimacy, anticipating their needs, and
delivering unique shopping and service experiences.” Or some such drivel.

It started in the 1980s and was called database marketing. It involved dividing consumers into discrete segments based on analyzing their purchases (usually credit
card transactions), credit history, and other financial data, as well as standard demographic and new “lifestyle” or “psychographic” information.

Based on these analyses, customers were labeled, categorized, typecast, and pigeonholed into groups called market segments. These groups were then solicited
relentlessly, based on some statistician’s assumption of what “we” liked.

The success of such efforts? Limited. The targeting criteria were simplistic and primitive, and many of the assumptions were just plain wrong. It’s no accident that
these efforts coincided with a consumer backlash in the form of consumer protection legislation against telemarketing and direct response selling.

Then in the 1990s came the Internet, which, along with advances in data storage, analysis, and communications, ushered in a new generation of personalization technology.
I’ve grouped the major technologies into three categories, ranked by increasing complexity and cost:

  • Simple Web-based recommendation engines and click-stream analytics
  • Business rules-based systems (which can be across several sales channels)
  • Advanced data analytics and data warehousing

Simple Web-Based Analytics

One of the great beauties of the Web is that every move your prospect makes — every link or banner clicked on, every search conducted — can be meticulously and
cheaply recorded simply by configuring the Web server’s log file to capture such data. Perhaps better still, individual customers can be “recognized” when
they return to your site, either through techniques on the user’s computer (cookies) or on your servers (user-agent identification).
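
To make the mechanics concrete, here is a minimal sketch (not taken from the column or any particular product) of that kind of click-stream capture: it parses a Web server access log in an assumed combined format with a cookie field appended, and recognizes a returning visitor by a hypothetical “visitor_id” cookie.

```python
# Minimal sketch: click-stream capture from a Web server log, with cookie-based
# visitor recognition. The log layout and the "visitor_id" cookie are assumptions.
import re
from collections import defaultdict

# Combined Log Format with a trailing quoted Cookie field (an assumed layout).
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d+) \S+ '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)" "(?P<cookies>[^"]*)"'
)

def visitor_id(cookies: str) -> str | None:
    """Pull the hypothetical 'visitor_id' value out of the Cookie header."""
    for part in cookies.split(";"):
        name, _, value = part.strip().partition("=")
        if name == "visitor_id":
            return value or None
    return None

def pages_by_visitor(log_lines):
    """Group the pages each recognized visitor viewed -- the raw click-stream."""
    history = defaultdict(list)
    for line in log_lines:
        match = LOG_PATTERN.match(line)
        if not match:
            continue  # skip malformed entries
        vid = visitor_id(match["cookies"])
        if vid:  # only visitors we can recognize on a return visit
            history[vid].append(match["path"])
    return history

sample = (
    '192.0.2.1 - - [03/Oct/2024:10:00:00 +0000] "GET /products/widget HTTP/1.1" '
    '200 512 "https://example.com/" "Mozilla/5.0" "visitor_id=abc123"'
)
print(dict(pages_by_visitor([sample])))  # {'abc123': ['/products/widget']}
```

Even in this toy form, the raw data accumulates cheaply; what it does not tell you is who the visitor is or what any of those page views are worth.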

The good news: businesses could accumulate a wealth of data about online prospects. The bad news: as with most offline customer data, most organizations really
didn’t know what to do with all this information.

Yes, businesses could produce fancy reports of how many pages were served, who looked at them, and what content they viewed. But these reports provided a
very limited view of who these people were and what they wanted, leaving their economic value unclear.

One of the most promising Web-based personalization technologies has been collaborative filtering. This relatively inexpensive technology compares information and
identifies behavioral patterns through a simple (some would argue shallow) analysis of data relationships.

Collaborative filtering operates on the assumption that groups of users share similar tastes. If you like product A, you’ll probably like product B, which many product
A buyers have also bought. While this rule doesn’t hold up for many product and service categories, it has been successfully used for books and movies.

By collecting expressed preferences of groups of users, collaborative filtering can be an effective recommendation engine (although in most applications it’s used to
serve targeted content). It’s relatively easy and cheap. And because it is based on expressed preferences — either provided explicitly in online forms, responses to
inquiries or category searches, or implicitly by recording the pages users access and products they buy — the technology provides richer and more adaptive
personalization than simple market demographics.
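
To illustrate the “buyers of A also bought B” logic described above, here is a minimal sketch of item-to-item collaborative filtering over purchase histories; the data and function names are purely illustrative and not drawn from any specific engine.

```python
# Minimal sketch: item-to-item collaborative filtering by co-occurrence counting.
# Purchases are treated as implicit preferences; all data here is made up.
from collections import Counter
from itertools import combinations

# Each customer's set of purchased items.
purchases = {
    "alice": {"book_a", "book_b", "book_c"},
    "bob":   {"book_a", "book_b"},
    "carol": {"book_b", "book_d"},
}

def co_occurrence(histories):
    """Count how often each pair of items shows up in the same customer's history."""
    counts = Counter()
    for items in histories.values():
        for pair in combinations(sorted(items), 2):
            counts[pair] += 1
    return counts

def recommend(item, histories, top_n=3):
    """Recommend the items most often bought alongside `item`."""
    counts = co_occurrence(histories)
    scores = Counter()
    for (a, b), n in counts.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return [other for other, _ in scores.most_common(top_n)]

print(recommend("book_a", purchases))  # ['book_b', 'book_c']
```

Note that nothing in those co-occurrence counts captures why items were bought together, which is precisely the product-centric blind spot discussed below.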

The downside is that it results in only rough categorization of customers, from an extremely product-centric view (it records only preferences for certain products,
not why customers prefer them).

And if context is not factored in the approach (such as whether the purchase is intended as a gift for someone else), the personalization results can be horrendous.
Just imagine the recommendations a collaborative filtering engine could generate for a classical music buff after he buys an Eminem CD for his nephew’s birthday.

In Part 2 tomorrow, I’ll discuss the advantages and disadvantages of rules-based personalization and advanced analytical and data mining solutions, as well as where
personalization as a strategy is headed.

Arthur O’Connor is a senior manager in the financial services practice of KPMG Consulting, specializing in customer strategy as well as related architectural and organizational issues. An author, speaker, and consultant, he focuses on customer relationship management and eCRM.
