8 Lead Scoring Pitfalls You Must Avoid For An Accurate Model
In our company’s work with marketing automation clients, optimising lead scoring is critical to ensuring they make the most of their technology investment. When done well, it can increase lead conversion rates, improve marketing’s relationship with sales, and lead to greater customer satisfaction.
The best-performing marketing teams use lead scoring. In fact, Aberdeen research shows that 68% of best-in-class companies use lead scoring in comparison with 28% of laggard firms (State of Marketing Automation 2014: Processes that Produce, 2014).
However, creating an accurate lead scoring model is a challenge and, without enough preparation, may lead to inaccurate scores – and buyers falling through the cracks.
Here are 8 examples of lead scoring pitfalls B2B companies often fall into – and some info on how to avoid them!
1. Asking for BANT information in forms
BANT = budget, authority, need and timeline – the common attributes used to determine sales-readiness.
BANT helps identify prospects that are in the buying process, especially for B2B companies. However, prospects often provide inaccurate information when we ask for these attributes in forms. Here are a few reasons why people don’t answer BANT questions accurately:
- They simply don’t know as they are not the decision maker
- They are too early in the buying process
- They work with changing priorities
- They would rather avoid being contacted by Sales
2. Assuming self-entered information is always accurate
A prospect will often decide to provide inaccurate information even when they do know the correct details. For instance, the phone field is often particularly unreliable as prospects don’t want to be contacted before they are ready.
Steps to improve data capture with forms
- Make sure your forms are concise.
- Think carefully about the design of each element (headers and labels, checkboxes, dropdowns and buttons), as well as the questions and sequence.
- Consider what you are asking in your forms. Someone registering for a webinar will be more likely to provide a valid email address in order to receive the access instructions.
- It is also possible to augment your form data: you can use other data sources to fill in any gaps.
Another useful technique is progressive profiling, which asks a few new questions every time the prospect requests information from your website to build out a profile. One benefit of this is the ongoing elimination of data inaccuracies when you pre-populate forms with existing data that a prospect can accept, correct or update.
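Progressive profiling can be sketched in a few lines. The field names and batch size below are illustrative assumptions, not tied to any particular marketing automation platform:

```python
# Minimal sketch of progressive profiling: on each form render, ask only
# a few questions the prospect has not yet answered, and pre-populate
# known values so the prospect can accept or correct them.

PROFILE_FIELDS = ["email", "first_name", "company", "job_title",
                  "industry", "company_size", "country"]

def next_questions(known_profile: dict, batch_size: int = 3) -> list:
    """Return the next unanswered fields to show on a form."""
    missing = [f for f in PROFILE_FIELDS
               if not known_profile.get(f)]          # unanswered or blank
    return missing[:batch_size]

def prefill(known_profile: dict) -> dict:
    """Pre-populate form fields the prospect can accept or correct."""
    return {f: v for f, v in known_profile.items() if v}

profile = {"email": "jane@example.com", "first_name": "Jane"}
print(next_questions(profile))   # ['company', 'job_title', 'industry']
```

Each completed form shrinks the `missing` list, so the profile builds out over repeated visits without any single form becoming long.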
3. Ignoring rules for data quality scoring
Scoring rules that deduct points for poor-quality data entered by a prospect help you focus only on genuine prospects.
Examples of data quality scoring:
- Decrease a prospect’s score if the email address comes from a common free email domain (gmail.com, yahoo.com, mac.com, hotmail.com).
- Decrease it if the first or last name does not contain any vowels.
- Decrease it if the company name inferred from the IP address maps to an ISP rather than a corporate domain.
You can also increase or decrease a prospect’s lead score based on the information you can infer from their IP about their geographic location (this is especially useful if you only operate in certain countries).
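The data quality rules above can be expressed as a simple scoring function. The point values, field names and the `inferred_company_type` flag are illustrative assumptions you would tune to your own data:

```python
import re

# Minimal sketch of data quality scoring: return a (usually negative)
# adjustment to a lead's score based on the rules in this section.

FREE_EMAIL_DOMAINS = {"gmail.com", "yahoo.com", "mac.com", "hotmail.com"}

def data_quality_adjustment(lead: dict) -> int:
    """Deduct points for signals of poor-quality or fake data."""
    adjustment = 0
    domain = lead.get("email", "").rsplit("@", 1)[-1].lower()
    if domain in FREE_EMAIL_DOMAINS:            # common free email domain
        adjustment -= 10
    for name in (lead.get("first_name", ""), lead.get("last_name", "")):
        if name and not re.search(r"[aeiou]", name, re.IGNORECASE):
            adjustment -= 15                    # no vowels: likely keyboard mash
    if lead.get("inferred_company_type") == "ISP":
        adjustment -= 5                         # IP maps to an ISP, not a company
    return adjustment

lead = {"email": "xkzq@gmail.com", "first_name": "Xkzq", "last_name": "Smith"}
print(data_quality_adjustment(lead))  # -25
```

Rules like these are cheap to run on every form submission, but as noted below they complement rather than replace manual review.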
Remember that no matter how good your solution, leaving all the work to automated programming will lead to errors. The number of “Mickey Mouse” leads that make it through automated systems is much higher than you think! So it’s important to include some good old-fashioned “eyeballing” on a regular basis!
4. Assuming you sell to the CEO
It is often assumed that the more senior the job title, the more likely the person is to be the decision maker in the buying process and, as a result, senior titles are awarded the highest demographic score.
In reality the CEO rarely spends time looking at departmental-level products and services. Instead you need to understand who actually buys your product. If a director is the decision-maker, then they should be awarded the highest score.
There will also be other roles that influence the buying process. You must understand the group behaviour and adjust your scoring accordingly.
5. Assuming a bigger company is always better
Another assumption companies often make is that bigger companies, with a greater number of employees, deserve higher scores. You shouldn’t assume a few large companies are the best targets before you analyse the value potential from your mid-market prospects.
Another common error is grouping data into oversized bands. Giving high scores to every employee of a company with over 1,000 employees is too broad. There are simply too many companies of that size to allow accurate scoring.
6. Scoring active and latent behaviour the same
Behaviour is complex. You need to evaluate and value different kinds of behaviour and adjust your scores accordingly.
The most important distinction lies between active vs. latent buying behaviour. Active buying behaviour identifies the hot leads based on activities that demonstrate current interest. Latent buying behaviour involves lower engagement activity.
For example, imagine two similar prospects exhibiting different behaviours. One has looked at a case study, watched a demo and visited a pricing page – all in the last week.
The other prospect has made several repeat visits to the same page over a longer period of time without showing any explicit interest in your product or service.
While these prospects might both achieve the same overall engagement, the first one is active while the other is latent. When creating a lead scoring model it is important to ensure you are able to adjust your scoring to take these different buying behaviours into account.
You might identify specific active buying behaviours as critical and give them a higher score, and identify more latent buying behaviours as influencing and give them a lower score.
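One way to implement that distinction is to keep separate point tables for critical (active) and influencing (latent) behaviours. The activity names and point values below are assumptions for illustration:

```python
# Illustrative sketch: weight active (critical) buying behaviours higher
# than latent (influencing) ones, so a burst of high-intent activity
# outranks a long tail of low-engagement visits.

ACTIVE_POINTS = {"pricing_page_visit": 20, "demo_watched": 15,
                 "case_study_download": 10}
LATENT_POINTS = {"blog_visit": 2, "newsletter_open": 1}

def behaviour_score(activities: list) -> int:
    """Sum points across a prospect's recorded activities."""
    score = 0
    for activity in activities:
        score += ACTIVE_POINTS.get(activity, 0)
        score += LATENT_POINTS.get(activity, 0)
    return score

# The two prospects from the example above:
active_prospect = ["case_study_download", "demo_watched", "pricing_page_visit"]
latent_prospect = ["blog_visit"] * 10   # repeat visits to the same page

print(behaviour_score(active_prospect))  # 45
print(behaviour_score(latent_prospect))  # 20
```

Even though the latent prospect has more total activities, the weighting keeps the active prospect clearly on top.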
7. Score inflation
When you award points to a prospect for a specific behaviour (e.g. visiting a web page, attending a webinar), that score shouldn’t last forever.
The value of someone attending a webinar last week, for example, is more significant than that of a webinar attended a year ago.
However, many lead scoring models don’t reflect degradation of score value over time, and the result is “score inflation,” where individual prospects may ring up lead scores of one thousand points or more, by which time the score has become meaningless.
One solution is to deduct points for no website activity for a period of time. This could be days, weeks or months depending on the nature of your business.
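That deduction approach can be sketched as a simple decay function. The 30-day period and 10-point rate are assumptions you would tune to your own sales cycle:

```python
from datetime import date

# Minimal sketch of score decay to avoid "score inflation": deduct points
# for every full period of inactivity, floored at zero.

def decayed_score(raw_score: int, last_activity: date, today: date,
                  period_days: int = 30, points_per_period: int = 10) -> int:
    """Deduct points for each full inactivity period since last activity."""
    idle_periods = (today - last_activity).days // period_days
    return max(0, raw_score - idle_periods * points_per_period)

today = date(2015, 6, 1)
print(decayed_score(80, date(2015, 5, 25), today))  # 80 (active recently)
print(decayed_score(80, date(2015, 1, 1), today))   # 30 (5 idle periods)
```

Flooring at zero keeps long-dormant prospects from going negative, while recent activity leaves the score untouched.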
8. Setting and forgetting
Lead scoring methodologies, especially when new, are an imperfect science. You meet with Sales, build common definitions, and assign scores based on your understanding of the buying process. However, the market shifts, buyer behaviour changes, and your lead scoring system is no longer accurate. Hence, it is important that you don’t set it and forget it.
Instead, regularly survey sales to determine how well the lead scoring system reflects the reality of buyers. Run conversion reports to determine if “hot” leads truly convert better than other lead categories. And do it every 3-6 months depending on your sales cycle.
With the increasing popularity of marketing automation solutions lead scoring has become an integral part of a marketer’s arsenal.
However, companies new to marketing automation often launch scoring too quickly in a rush to reap the benefits. This can lead to sales disenchantment, to the point where good leads are ignored and ROI suffers.
It is essential, therefore, that you spend enough time planning, optimising and testing your lead scoring method.