For a long time, I managed to avoid spending any significant amount of energy on manual lead scoring. The work seemed tedious and the applications murky. My plan was to keep an eye out for AI-enabled predictive lead scoring solutions and let the robots do it.
The thing is, while it’s true that there are machine learning-based predictive lead scoring technologies out there – in fact, this is something we’re actively researching and looking to incorporate more directly into our services – these platforms don’t fit within everyone’s budget.
Eventually, I stopped resisting and have implemented a number of lead scoring frameworks now. It took some trial and error, but I can attest that it’s not the waste of time I worried it might be.
If high-end predictive lead scoring solutions are out of reach for your business, having a good manual lead scoring system in place now is better than waiting and having nothing in the meantime. If you’re considering implementing lead scoring for the first time, or you’re looking to improve on your existing framework, here are some best practices to start you on the right path, some common mistakes to avoid, and a few key lessons I learned along the way.
(Note: these are the main takeaways I’ve identified based on experience; others may have different best practices or mistakes they’ve encountered. Also, it should be noted that as a majority of our clients are on HubSpot, that accounts for most of my lead scoring experience.)
Start with crystal clear needs and objectives.
It’s a given that lead scoring should help sales and marketing teams better qualify and nurture leads by assigning points based on notable attributes and behaviors. To avoid putting time into a lead scoring system that has no real-world value, you have to take this a step further. The first and most important step of lead scoring is to get clarity on exactly what sales needs out of lead scoring in order to more effectively prioritize and close leads.
If lead scoring is done in a marketing vacuum, it’ll likely end up being something that looks nice to the marketing team and gets completely ignored by sales. Trust me, I’m as tired of hearing the phrase “sales and marketing alignment” as anyone, but there has to be agreement on what job lead scoring needs to accomplish.
If there’s no documented process for how marketing is handing off leads to sales, start there. Then, run through the scenarios of different types of leads the team is working with to understand where the biggest needs are and where lead scoring can fit in. As a visual person, I like using a diagram or flowchart for this – but do whatever works best for you.
Don’t overcomplicate your scoring or your segments.
Once you start looking at the different types of triggers that might go into your lead scoring framework, it can be tempting to slice and dice leads into a bunch of super granular segments.
Don’t do this.
Remember that ultimately the lead scoring should be helping sales easily understand where a lead is in their buyer’s journey and when it makes sense to reach out. If there are too many outputs from the lead scores, it creates more work for sales to keep track of them all and respond appropriately.
Also, no matter how thoughtfully you build the scoring system, there are always going to be exceptions to the rule (i.e. leads whose scores don’t actually line up with their intent because of reasons outside of your control) – which means the more possible segments, the more exceptions there will be. (TLDR: more segments, more mess.)
HubSpot recommends segmenting leads into three buckets:
- Leads in need of nurturing
- Engaged leads
- Lead score MQLs
In most cases, I’ve found this to be a good approach, too. This provides a manageable, easy-to-understand framework and limits the potential for complications and confusion.
In service of the same goal of simplicity, I also find it’s best to keep your score ranges small. Consider a scoring system closer to 0-10 than 0-100. It’s fairly intuitive to understand the difference between a 6 and an 8, but how should sales think about the difference between a 67 versus an 80? And is that different from comparing a 60 against an 87?
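To make the three buckets concrete, here’s a minimal Python sketch of how a small 0-10 score might map onto them. The thresholds are hypothetical placeholders – not HubSpot’s – so tune them against your own conversion data:

```python
def lead_bucket(score: int) -> str:
    """Map a 0-10 lead score into one of three segments.

    The cutoffs (0-3, 4-7, 8-10) are made-up examples; adjust them
    based on what sales and marketing agree each segment means.
    """
    if score <= 3:
        return "needs nurturing"
    if score <= 7:
        return "engaged"
    return "lead score MQL"
```

With a range this small, the mapping is easy for anyone on the sales team to keep in their head – which is exactly the point.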
Account for decay over time.
Obviously, a lead who was all over your pricing page yesterday should be scored higher than one you haven’t seen in several months.
To address this, you can build in negative scoring based on the recency of last activity. To use a simple example, depending on the typical sales cycle, you might apply negative scores after one month, three months, six months and a year to account for time elapsed. (Keep in mind these negative scores will compound as more time elapses.)
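A minimal Python sketch of that compounding penalty – the thresholds and point values here are made-up placeholders, so calibrate them to your typical sales cycle:

```python
def decay_penalty(days_inactive: int) -> int:
    """Return the total negative points for a lead's inactivity.

    Every threshold crossed adds its penalty, so the penalties
    compound as more time elapses. Values are illustrative only.
    """
    thresholds = [(30, -1), (90, -1), (180, -2), (365, -3)]
    return sum(points for days, points in thresholds if days_inactive >= days)
```

A lead untouched for 45 days loses only the one-month point, while one that’s been cold for over a year accumulates all four penalties.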
If you’re math-inclined, you might try out something more sophisticated to account for decay, such as MadKudu’s nuclear-physics-inspired equation:
Y = ∑ α·f(X)·e^(−t(X)/λ)

where:

- Y is still the representation of conversion
- X are the events
- f are the feature functions extracted from X
- t(X) is the number of days since the last occurrence of X
- α are the predictive coefficients
- λ are the “half-lives” of the events in days
Good luck with that!
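If you do want to try it, the equation sketches out in a few lines of Python. The tuple layout and the half-life note below are my own assumptions for illustration – this is not MadKudu’s implementation:

```python
import math

def decayed_score(events):
    """Compute Y = sum(alpha * f(X) * exp(-t(X) / lam)).

    `events` is a list of (alpha, f_value, days_since, lam) tuples.
    Note: exp(-t/lam) scales a score by 1/e (~0.37) after lam days,
    not 1/2; for a literal half-life, use lam = half_life / ln(2).
    """
    return sum(a * f * math.exp(-t / lam) for a, f, t, lam in events)
```

For example, a single event with coefficient 2.0 whose last occurrence was exactly one decay constant ago contributes 2.0/e ≈ 0.74 to the score.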
Find a foolproof way to make sure active sales leads aren’t being nurtured by marketing.
Often, one goal of a lead scoring system is to identify leads that are not sales-ready (e.g. “leads in need of nurturing” from the section above) who can be put on some sort of automated track like a drip campaign until they become sales-ready.
One potential complication here is that a lead who is being actively worked by sales might fall into this lead score range and land in automated marketing land, which sucks for everyone. The lead is annoyed to be getting irrelevant emails which means sales is pissed and marketing takes the blame.
There are a couple ways to avoid this:
- Create a “lead status” or similar contact property in your CRM. Then, make sure sales marks all active leads as “active” and set rules for all automated nurturing campaigns to exclude contacts with “lead status” marked as “active.” (On paper this solution should work best, but it does rely on the sales team to keep the property updated.)
- Determine which behaviors and/or attributes trigger sales outreach and apply high enough scores to these to prevent the contacts from falling back into the “needs engagement” zone. For example, if completing a demo form makes a lead sales-ready, and your max score for a lead score MQL is 30, add 100 points for the completion of this form. (This sort of goes against the idea of keeping the range small, but you can effectively ignore these high scores since sales will already be working the corresponding leads.)
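Both safeguards boil down to a filter applied before anyone is enrolled in a drip campaign. Here’s a hedged Python sketch – the `lead_status` field name is hypothetical, and the 30-point MQL ceiling comes from the example above:

```python
def nurture_list(contacts):
    """Return only the contacts eligible for automated nurturing.

    Each contact is a dict with hypothetical 'lead_status' and 'score'
    fields. Anyone actively worked by sales, or boosted past the MQL
    ceiling by a sales-ready trigger, is kept out of drip campaigns.
    """
    MQL_CEILING = 30  # max score for a lead score MQL in this example
    return [
        c for c in contacts
        if c.get("lead_status") != "active" and c["score"] <= MQL_CEILING
    ]
```

In practice you’d build this as workflow enrollment criteria in your marketing automation platform rather than as code, but the logic is the same.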
Test your lead scoring more than you planned to, and then test some more.
Once you’ve got the first iteration of your lead scoring framework in place, give yourself a pat on the back – then get ready to get back to work.
The best way to tighten up your system is to get marketing and sales together to test the system (and test it again and again). Software like HubSpot and Marketo will let you drop in contacts so you can see how they’re being scored and compare this with any other attributes and behaviors to see if theory lines up with the more complete picture.
As mentioned earlier, there will always be exceptions to the rules, and your goal shouldn’t be to account for 100% of the possibilities because that’s not realistic. But by dedicating time to test contacts, compare notes with the sales team and make adjustments, you’ll catch the major issues and end up with a system that will help your business close more leads.
Soon enough, predictive lead scoring based on machine learning may be within reach for small businesses on tight budgets – and when that happens, we can all spend less time on manual lead scoring. Until then, putting together a well-planned manual system will help your sales team close more leads and use their time more efficiently.
Rather have a team of marketing automation specialists do this heavy lifting for you? I know just the team.