Imagine that you’re in the market for an insurance policy. You start with a basic search online, find a couple of trustworthy websites, and leave your contact information so that they can reach out to you. You haven’t made a decision yet; you are looking to learn more about what’s on offer. Needless to say, all the insurance companies competing to serve you have similar offerings. In most cases, the first salesperson to call you gets your attention. But it could be that that person isn’t fluent in your language. Right after this call, you get another call from a different salesperson who not only speaks your language but lives two blocks away from you. It is likely that the second salesperson wins your attention.
Now look at this from the insurance company’s point of view. Getting the right salesperson to call the lead as soon as possible is of utmost importance to closing the sale. This is true for any sales team.
Institutions in the BFSI segment receive thousands of leads every minute across different distribution channels and geographies. At this scale, managers can’t manually allocate leads to their team members. Automated lead allocation has become the norm in the industry. But a static rule-based allocation system is no longer the best solution available, because it isn’t optimized for lead conversion.
With Vymo, sales teams can maximize lead conversions from the get-go. We use a combination of manual, automatic, and ML-based methods to allocate leads for our users. Machine Learning (ML) powered allocation is specifically optimized for lead conversion, as compared to a rule-based approach.
ML Lead Allocation with Vymo
The ML-based Lead Allocation consists of two parts:
- Allocation Service Rule-set
- ML Model
The Allocation Rule-set is an ordered set of criteria that must be met while considering a possible user for a lead. This is a necessary stage to shortlist the available users for the lead. The Rule-set can be based on the following user or lead attributes:
- Lead Details viz. Source of the Lead, Product Type, Potential Ticket size
- Current Location of Lead and User
- User Details viz. Role, Department, Zone, Channel, Tenure, Online/Field Agent
- User Performance Metrics viz. Conversion ratio, Number of leads engaged, Avg Time to first call
A Rule-set can be built by combining rules that consider one or more of the above parameters.
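To make the shortlisting stage concrete, here is a minimal sketch of how ordered rules might filter eligible users. The `User` fields, rule lambdas, and thresholds are illustrative assumptions, not Vymo's actual schema.

```python
# Hypothetical sketch of a Rule-set shortlist; field names and rules
# are illustrative, not Vymo's actual implementation.
from dataclasses import dataclass

@dataclass
class User:
    user_id: str
    role: str
    zone: str
    active_leads: int
    conversion_ratio: float

def shortlist_users(lead, users, rules):
    """A user must pass every rule, applied in order, to qualify."""
    return [user for user in users
            if all(rule(lead, user) for rule in rules)]

# Example rules combining lead and user attributes
rules = [
    lambda lead, user: user.zone == lead["zone"],           # location match
    lambda lead, user: user.role in lead["allowed_roles"],  # role criterion
    lambda lead, user: user.conversion_ratio >= 0.1,        # performance floor
]

lead = {"zone": "south", "allowed_roles": ["agent"]}
users = [
    User("u1", "agent", "south", 2, 0.20),
    User("u2", "agent", "north", 1, 0.30),   # fails zone rule
    User("u3", "manager", "south", 0, 0.50), # fails role rule
]
matches = shortlist_users(lead, users, rules)
```

Because the rules are plain predicates, new criteria (tenure, channel, ticket size) can be appended without touching the filtering logic.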
Once a set of users is shortlisted as potential matches for the lead, the allocation can happen in either of two ways. The first is the simple Round-Robin method, which is an equal distribution. The second method involves the ML Model. Customers can configure whether any given lead type has equal distribution or ML-based Allocation.
In case the lead has ML Allocation configured:
- The set of users eligible to be matched with the lead (based on the Rule-set) is fed into the ML Model.
- The ML Model uses the lead’s and users’ data fields to compute a lead conversion propensity score for each potential match.
- The match with the highest propensity to convert is used by the Allocation system.
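The two allocation paths above can be sketched as follows. The scoring function standing in for the ML Model, and the dict fields, are assumptions for illustration only.

```python
# Sketch of the two allocation paths: ML propensity vs. round-robin.
# score_fn stands in for the real model's propensity output.
import itertools

def allocate(lead, users, use_ml, score_fn=None, rr_cycle=None):
    """Choose a user for the lead.

    use_ml=True  -> score each lead-user pair, take the highest propensity
    use_ml=False -> simple round-robin over the shortlisted users
    """
    if use_ml:
        return max(users, key=lambda user: score_fn(lead, user))
    return next(rr_cycle)

# Illustrative stand-in for the model's propensity score
score_fn = lambda lead, user: user["conversion_ratio"]

users = [{"id": "u1", "conversion_ratio": 0.15},
         {"id": "u2", "conversion_ratio": 0.40}]
rr = itertools.cycle(users)

best = allocate({"id": "lead-1"}, users, use_ml=True, score_fn=score_fn)
```

The round-robin path ignores scores entirely, which is exactly why it distributes leads equally but leaves conversion on the table.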
How is Vymo’s ML Model deployed?
- The leads’ and users’ historical data is fed into the ML Model. Each entry is one lead-user match that was allocated in the past, along with the result of that allocation, i.e. conversion or churn.
- As the ML Model learns from each record, it adjusts its parameters to find a pattern that identifies the lead-user matches with the highest propensity for conversion.
- We use the Cross Validation technique to make the ML Model more robust.
- After creating and training a model, we test its performance on new, unseen data. We compare the Model’s predictions with actual results, using metrics such as Accuracy, F1 Score, and Area-under-the-Curve (AUC) to quantify its effectiveness.
- If the Model doesn’t meet our standards of effectiveness, we go back to step 1 and retrain it.
- Once the Model is effective, it is deployed. Customers then have the option to choose the ML Model based Allocation system for their leads.
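The train / cross-validate / evaluate loop above can be sketched with a toy, self-contained example. The threshold “model” and synthetic history are stand-ins for the real model and data; only the k-fold structure mirrors the process described.

```python
# Toy sketch of k-fold cross validation over historical lead-user
# allocations; the threshold "model" stands in for the real ML Model.
import random

def k_fold_splits(records, k=5):
    """Yield (train, validation) splits for k-fold cross validation."""
    random.Random(42).shuffle(records)
    fold_size = len(records) // k
    for i in range(k):
        val = records[i * fold_size:(i + 1) * fold_size]
        train = records[:i * fold_size] + records[(i + 1) * fold_size:]
        yield train, val

def fit_threshold(train):
    """Toy model: a score threshold halfway between the class means."""
    converted = [r["score"] for r in train if r["converted"]]
    churned = [r["score"] for r in train if not r["converted"]]
    return (sum(converted) / len(converted) + sum(churned) / len(churned)) / 2

def evaluate(threshold, records):
    """Accuracy of predicting conversion when score exceeds the threshold."""
    correct = sum((r["score"] > threshold) == r["converted"] for r in records)
    return correct / len(records)

# Synthetic historical allocations: a feature score + observed outcome
history = [{"score": s / 100, "converted": s > 50}
           for s in random.Random(0).sample(range(100), 60)]

accuracies = [evaluate(fit_threshold(tr), va)
              for tr, va in k_fold_splits(history)]
mean_accuracy = sum(accuracies) / len(accuracies)
```

Averaging the validation score across folds is what makes cross validation robust: no single lucky split can inflate the measured effectiveness.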
Over time, ML Models tend to become less effective on newer data sets; to tackle this, we periodically retrain the Model.
Our objective in deploying this model is to increase the median propensity score for lead conversion (over a set of leads), since a higher score reflects a higher chance of lead conversion.
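Tracking that objective amounts to comparing the median propensity score of a batch of allocations before and after the model goes live. The score values below are purely illustrative.

```python
# Sketch: comparing median propensity over a batch of allocations,
# the metric this deployment aims to lift. Values are illustrative.
from statistics import median

scores_rule_based = [0.22, 0.31, 0.28, 0.35, 0.30]
scores_ml_based   = [0.41, 0.38, 0.45, 0.52, 0.44]

uplift = median(scores_ml_based) - median(scores_rule_based)
```

Using the median rather than the mean keeps the metric robust to a few unusually hot or cold leads in any given batch.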
Why use Vymo’s ML-based Lead Allocation?
Vymo knows your sales reps better.
Vymo captures user behavior and sales engagements at a finer granularity. We know how and when your sales reps work. By being an integral part of our users’ daily workflow, we are in a unique position to track their performance across metrics that are otherwise untraceable.
We bring this knowledge to our Lead Allocation system.
The ML Model will do the thinking for you.
Vymo’s ML-based Lead Allocation aims to allocate each lead to the person who is most likely to convert it. While lead allocation in itself is an operational challenge for many technology providers, we take it up a notch in the value we provide to the customer.
Such an allocation method, by way of its implementation, can keep learning. It can be refined by iterating on newer data so that it adjusts to the context of your specific business use-case. With each iteration of learning, it recalibrates the weight given to each parameter and improves its accuracy.
Quickly adapt to changing user behavior.
Since the ML Model considers users’ performance metrics in allocating leads, users that show a performance dip will be allocated fewer leads. If their conversion rates rise back up, they’ll be given more leads.
If a user already has sufficient leads to work on, the system doesn’t allocate more, so their workload isn’t inundated in a way that would itself cause a performance dip.
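These two adaptations — skipping users at their workload cap, and letting recent conversion rate drive eligibility — can be sketched together. The cap value, field names, and scoring are hypothetical.

```python
# Hypothetical sketch: workload-capped, performance-weighted allocation.
# max_active_leads and the score fields are illustrative assumptions.
def eligible(user, max_active_leads=10):
    """Skip users who already have enough leads to work on."""
    return user["active_leads"] < max_active_leads

def pick_user(users, score_fn):
    candidates = [u for u in users if eligible(u)]
    if not candidates:
        return None  # hold the lead until someone frees up
    # Recent conversion rate feeds the score, so a performance dip
    # naturally routes fewer leads to that user
    return max(candidates, key=score_fn)

users = [
    {"id": "u1", "active_leads": 12, "recent_conversion": 0.50},  # at capacity
    {"id": "u2", "active_leads": 3,  "recent_conversion": 0.20},
    {"id": "u3", "active_leads": 5,  "recent_conversion": 0.35},
]
chosen = pick_user(users, score_fn=lambda u: u["recent_conversion"])
```

Note that the top performer (`u1`) is passed over because they are at capacity; the lead goes to the best available user instead.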
Our ML-based allocation positively impacts your revenue from the moment it is deployed. In a recent implementation, one of our customers saw significant results within just two months.
ML-based allocation is the only way to find the best possible middle ground between quick and accurate allocation. It can be deployed wherever there is enough data to train the ML model, and once set in motion, it becomes a journey of continuous improvement and optimization that keeps on giving.