
Risks of Poor Data Quality in Salesforce

By Steven Pogrebivsky | August 18, 2020 (updated June 12, 2023)

If we were to survey executives, sales professionals, and other business leaders about the biggest time and budget drains at their companies, duplicate data would probably not rank very high on the list. However, allowing this issue to fester will almost certainly snowball into a major and costly business problem. Let’s take a look at exactly how much poor data quality is costing your business, in terms of both time and money.

The Origin of Duplicates

Salesforce is a very flexible CRM that collects data from various sources. As a result, duplicate records can be introduced during manual entry, migration from another CRM, integrations with third-party systems, and many other channels. Once there, duplicates become a drain on your marketing budget, hinder the operations of your sales team, and prevent you from getting the comprehensive view of your customer needed to guide future interactions. Salesforce does offer some help in identifying duplicates, but its built-in tools often do not suit the exact needs of each individual company.

Here are some of the shortcomings noted by customers:

  • Not suitable for large data volumes – Salesforce itself has acknowledged this issue and it is discussed in greater depth in the Trailblazer Community.
  • Unable to perform a cross-object duplicate search – Very often, new leads already exist inside Salesforce, but as contacts. Salesforce, by itself, will not detect such duplicates since it only searches for them within a single object (see the sketch after this list).
  • You will need to buy a pricier edition – Salesforce can perform duplicate checks, but this feature is offered in the Unlimited edition (i.e. the most expensive one).
  • Dubious matching algorithms – The duplicates in your system will vary on a case-by-case basis, which means you need to catch all of them while avoiding false positives at the same time. That requires a more precise matching algorithm than the one Salesforce provides.
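To make the cross-object gap concrete, here is a minimal sketch of a lead-versus-contact email check using the simple-salesforce Python library. The credentials, field choices, and email-only matching are illustrative assumptions, not a production-grade approach:

```python
# Minimal sketch of a cross-object duplicate check: flag Leads whose
# email already exists on a Contact. Credentials and the email-only
# match key are illustrative assumptions.
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="password",
                security_token="token")

# Pull emails from both objects; a Lead-only duplicate rule would
# never compare these two sets against each other.
leads = sf.query_all("SELECT Id, Email FROM Lead WHERE Email != null")
contacts = sf.query_all("SELECT Id, Email FROM Contact WHERE Email != null")

contact_emails = {c["Email"].lower() for c in contacts["records"]}

cross_object_dupes = [
    lead["Id"] for lead in leads["records"]
    if lead["Email"].lower() in contact_emails
]
print(f"{len(cross_object_dupes)} leads already exist as contacts")
```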

Now that we know where duplicate data comes from and why Salesforce itself is unable to deal with it effectively, let’s take a look at how much all of this is costing you.

The 1-10-100 Rule

As a rule of thumb, it costs $1 to verify a record as it is entered, $10 to correct a mistake later, and $100 if nothing is done about the error. Based on our existing customers, we can assume that many organizations have millions of records. In terms of practical implications, consider how much it costs to send the same promotional material twice to a lead or prospect whose record is duplicated. In fact, you could be repeating this mistake indefinitely until the duplicate is found. Beyond the pure costs, you also run the risk of annoying people to the point where they no longer wish to open your messages.
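As a back-of-the-envelope illustration of the rule, the sketch below compares the three cost tiers for a hypothetical database; the record count and duplicate rate are assumptions for illustration only:

```python
# Back-of-the-envelope cost comparison under the 1-10-100 rule.
# The record count and duplicate rate are illustrative assumptions.
RECORDS = 1_000_000
DUPLICATE_RATE = 0.20          # assume 20% of records are duplicates

dupes = int(RECORDS * DUPLICATE_RATE)

verify_now = dupes * 1         # $1 to verify a record at entry
fix_later  = dupes * 10        # $10 to correct it after the fact
do_nothing = dupes * 100       # $100 in downstream cost per bad record

print(f"Verify at entry: ${verify_now:,}")
print(f"Correct later:   ${fix_later:,}")
print(f"Ignore:          ${do_nothing:,}")
```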

According to data from Just Associates, the duplicate rate in a database can be as high as 20-30%. All of this costs companies $600+ million annually. The takeaway from the 1-10-100 rule is that it’s a lot less costly to invest in software that can catch the duplicates before they snowball into big problems.

Lower ROI from Your Salesforce Investment

Companies invest in Salesforce because it does a very good job of managing customer data and gives them a complete view of the customer. However, this is contingent on the data being complete and accurate. In fact, 96% of companies report that they have trouble building a comprehensive, single view of their customer.

The biggest obstacle is not data collection, but rather dealing with the sheer volume of duplicate records. The data is there, but there is no way to see it all in one place.

Given the number of records companies have inside Salesforce, it is simply not possible to comb through each one manually. Customers often turn to third-party tools to detect duplicates, but most require you to develop static rules, which simply cannot cover every scenario. This is where machine learning algorithms can be a huge asset. Well-trained algorithms can detect duplicates without all the hassle while constantly learning from the user to continuously improve detection.
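As a rough illustration of the difference, the sketch below trains a simple pairwise classifier from user-labeled duplicate decisions instead of hand-written rules. The field names, similarity features, and tiny training set are assumptions for illustration, and this is not DataGroomr’s actual model:

```python
# Sketch of learning a duplicate classifier from user-labeled pairs
# rather than static rules. Field names and the toy training set are
# illustrative assumptions.
from difflib import SequenceMatcher
from sklearn.linear_model import LogisticRegression

def features(a, b):
    """Similarity features for a pair of records."""
    return [
        SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio(),
        SequenceMatcher(None, a["company"].lower(), b["company"].lower()).ratio(),
        1.0 if a["email"].lower() == b["email"].lower() else 0.0,
    ]

# Pairs a user has already reviewed: 1 = duplicate, 0 = distinct.
labeled_pairs = [
    (({"name": "Jon Smith", "company": "Acme Inc", "email": "js@acme.com"},
      {"name": "John Smith", "company": "Acme Incorporated", "email": "js@acme.com"}), 1),
    (({"name": "Ann Lee", "company": "Globex", "email": "ann@globex.com"},
      {"name": "Anne Leigh", "company": "Initech", "email": "al@initech.com"}), 0),
]

X = [features(a, b) for (a, b), _ in labeled_pairs]
y = [label for _, label in labeled_pairs]
model = LogisticRegression().fit(X, y)

# Each new confirmation or rejection can be appended to the training
# set and the model refit, so detection improves with use.
candidate = ({"name": "J. Smith", "company": "Acme, Inc.", "email": "js@acme.com"},
             {"name": "John Smith", "company": "Acme Inc", "email": "js@acme.com"})
print(model.predict_proba([features(*candidate)])[0][1])  # duplicate probability
```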

30% of Time is Lost Inside the CRM

A recent report shows that sales professionals waste 30% of their time sorting through bad data inside the CRM. Duplicate data is one of the most harmful forms of bad data since it creates issues such as ownership conflicts, distorted reporting, and scattered information. If salespeople spend much of their time tracking down accurate data, they will have a hard time meeting their sales quotas, which impacts your bottom line. And the issue can and will spiral out of control, since your marketing campaigns and sales activities are constantly generating new duplicates.

Salesforce can help somewhat through duplicate rules and additional customized settings, but it does not go far enough. Many duplicates are not 100% carbon copies of one another and will not be detected by Salesforce. There are sophisticated tools on the market that can catch duplicates of all shapes and sizes and offer you a side-by-side view of the records so you can decide whether they should be merged.
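To see why exact matching falls short, here is a small standard-library example in which two records for the same person fail an exact comparison but pass a normalized fuzzy one; the normalization rules and the 0.85 threshold are illustrative assumptions:

```python
# Why exact matching misses near-duplicates: a minimal fuzzy
# comparison using only the Python standard library.
from difflib import SequenceMatcher

def normalize(value):
    """Lowercase and strip punctuation and suffix noise before comparing."""
    value = value.lower()
    # Longer suffixes first, so " incorporated" is removed before " inc".
    for noise in (",", ".", " incorporated", " inc", " llc"):
        value = value.replace(noise, "")
    return value.strip()

a = "John Smith, Acme Incorporated"
b = "Jon Smith Acme Inc."

print(a == b)  # False: an exact-match rule sees two distinct records

score = SequenceMatcher(None, normalize(a), normalize(b)).ratio()
print(score > 0.85, round(score, 2))  # True 0.97: flagged for review
```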

Additional Wasted Time

One of the biggest benefits of Salesforce is a single, comprehensive view of customers. Sales teams rely on the information in the CRM to perform their daily operations and close deals. When employees start to see data issues, they become wary of relying on the information and start double-checking everything, resulting in wasted time and investment. Since marketing and support teams also interact with customers, they will run into the same issue. Duplicates can wreak havoc across the enterprise if the problem is not brought under control.

Each piece of information about a customer is a piece of a puzzle that your team is trying to put together. Completing it lets them see the customer’s needs and pain points, leading to increased conversion rates. Often, you already have all the pieces you need and don’t even know it, because the information is scattered across duplicate records. By getting rid of the duplicates, you empower teams throughout the organization to perform at their very best.

A Negative Perception of Your Brand

Put yourself in the shoes of a prospective customer who has already had several interactions with your brand: a newsletter signup, reading a blog post, or a more direct communication such as a phone call. After all these steps, they are almost ready to buy and have only a few questions before signing the contract. They get on a call with another sales representative, only to find that this person is unaware of any communication they had with the previous rep. Not only will the customer be put off by such an interaction, they will also lose trust in your company, and that trust can be difficult, if not impossible, to win back.

Such a situation can be especially damaging if your company operates in the B2B market, where sales cycles are long and many stakeholders are involved. This requires sales teams to coordinate with each other and get other departments involved as well. All the information needs to live in one customer record so that everyone is on the same page and one person can simply pick up where the previous one left off.

Start Getting Rid of your Duplicates Today

DataGroomr is a fast and easy tool for identifying and merging duplicate records. Unlike other solutions out there, it is powered by machine learning that detects even the duplicates that are hard to spot. The best part is that no configuration or setup is required. It presents you with a side-by-side comparison of duplicate records so you can determine whether they are in fact duplicates. The system constantly learns and improves, so the more you use it, the better it gets.

Steven Pogrebivsky

Steve Pogrebivsky has founded multiple successful startups and is an expert in data and content management systems with over 25 years of experience. Previously, he co-founded and was the CEO of MetaVis Technologies, which built tools for Microsoft Office 365, Salesforce and other cloud-based information systems. MetaVis was acquired by Metalogix in 2015. Before MetaVis, Steve founded several other technology companies, including Stelex Corporation, which provided compliance and technical solutions to FDA-regulated organizations. Steve holds a BS in Computer/Electrical Engineering and an MBA in Information Systems from Drexel University.