Is your Data an Asset or Liability?

Source: https://www.oreilly.com/library/view/creating-a-data-driven/9781491916902/ch01.html#idp4111616 

Being a data-driven company has become a bit of a management catchphrase.  Ironically, many companies throw the phrase around without providing the data to back up the claim.  So before we get into assessing your data, let’s review some basic qualifications: 

1. There is an established data pipeline from collection to regular analysis. 

Your company should have a well-established pattern for collecting data and assembling it into a shareable, usable format.  This means you have invested in a platform that joins data from disparate sources and feeds it into tools for reporting, monitoring, and alerting.  Reports are descriptive statistics, like tracking your weight on a daily basis: they tell you the current state and whether you should be concerned. Analysis takes the next step by combining related data sets (e.g., diet or exercise) into a story that delivers insights. 

2. Actions are regularly taken from data analysis. 

Being driven by data requires responding to all of that data stimulus.  The response differs for each company and its data, but it follows some common patterns: 

  • Continuous Improvement: Using models of their business processes, companies explore ways to decrease costs or cycle times by targeting causes of variation or eliminating non-value-added steps.  Manufacturing has practiced this under a number of labels, from Kaizen to Lean Six Sigma to value-stream mapping. 
  • Continuous Experimentation: Companies incorporate experiments into their regular processes to detect and capitalize on audience preferences or emerging trends.  Marketing firms constantly test website checkout flows and advertising media. 
  • Prediction-Based Decisions: Many companies are adopting predictive models to inform their forecasts and buying decisions.  Once commonplace only in the finance industry for trading stocks, predictive modeling has become widely available in the past few years through Machine Learning as a Service firms. 

So, how is your data looking? 

If the business maxim “you can’t manage what you don’t measure” is true, then this is a fair and scary question.  Why? 

  1. Gartner points to data quality as the primary reason that 40% of business initiatives fail to achieve their target benefits; data quality affects labor productivity by as much as 20%. (https://www.data.com/export/sites/data/common/assets/pdf/DS_Gartner.pdf) 

  2. Hidden data cost is estimated to be a $3.1 trillion problem in the US alone (https://hbr.org/2016/09/bad-data-costs-the-u-s-3-trillion-per-year). 

  3. Data growth was 5,000% over the past decade. (https://www.forbes.com/sites/gilpress/2021/12/30/54-predictions-about-the-state-of-data-in-2021/) 

There isn’t a one-size-fits-all program for measuring data quality. However, a great starting point is to focus on key business processes and several quality dimensions. 

Many organizations develop ratio measures around these dimensions.  Let’s take a simple example: maintaining a correct, current mailing address.  One measure could be the percent of validated addresses.  Data profiling, such as comparing city, state, and postal codes, can help determine completeness and flag a handful of accuracy concerns.  Services like the National Change of Address registry can proactively improve accuracy, relevance, and timeliness.  Deliverability reports from mailing campaigns provide a reactive and authoritative status check.  But notice what we did there: we have one (hopefully) source of constituent address information, but needed another two or three datasets to validate and build trust in what we had originally captured. 

If this seems costly, consider the opportunity costs of getting it wrong.  I had one client engage a major direct mail vendor that made a mistake on their year-end campaign: major donors received the new-donor mailing (and in some cases duplicates of the same mailing).  In another case, using 10-year-old demographic data in predictive models skewed constituent scoring to the point that the campaign missed revenue goals for every segment except new donors. 

In both situations, there were reputational and real costs associated with poor data quality. As organizations embrace automated marketing and other AI-enabled processes, data quality becomes critically important.  If an organization puts garbage data into its models, it can expect similar output.  Sophisticated garbage is still garbage. 
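The address-profiling ratio measure described above can be sketched in a few lines. The field names, the tiny city/state/ZIP reference table, and the record layout are all hypothetical illustrations; a real profile would use a postal reference dataset or an address-validation service.

```python
# Hypothetical sketch of a "percent of validated addresses" ratio measure.
# The reference table and field names are illustrative assumptions.
ZIP_PREFIXES = {
    ("Harrisburg", "PA"): ("171",),
    ("Philadelphia", "PA"): ("191",),
}

def is_profiled_valid(record):
    """True when the address is complete and city/state/ZIP agree."""
    required = ("street", "city", "state", "zip")
    if any(not record.get(field) for field in required):
        return False  # completeness check: every expected field has a value
    prefixes = ZIP_PREFIXES.get((record["city"], record["state"]))
    return bool(prefixes) and record["zip"].startswith(prefixes)

def percent_validated(records):
    """The ratio measure: percent of addresses passing the profile."""
    if not records:
        return 0.0
    return 100.0 * sum(is_profiled_valid(r) for r in records) / len(records)

addresses = [
    {"street": "1 Main St", "city": "Harrisburg", "state": "PA", "zip": "17101"},
    {"street": "2 Elm St", "city": "Philadelphia", "state": "PA", "zip": "17101"},  # ZIP mismatch
    {"street": "", "city": "Harrisburg", "state": "PA", "zip": "17101"},  # incomplete
]
print(round(percent_validated(addresses), 1))  # → 33.3
```

Notice that the measure is cheap to compute but only as trustworthy as the reference data behind it, which is exactly the validation-data dependency described above.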

What’s Next? 

As with most disciplines, this is not a problem that will be solved overnight.  It will take time to develop measures and data governance processes that make sense for your organization.  So we advise starting small and focusing your attention on the items that matter. 

1. Take a temperature check 

There are two great places to start.   

  1. Ask your employees what data they trust and what data they do not.  This sounds overly simple, but a basic question about how someone builds a proposal or determines project success reveals how data is sourced and interpreted.  This is where you may find that last month’s proposals are considered a better source of information than your internal pricing guides.  Project key performance indicators often reveal work-arounds driven by data timeliness or, in some cases, different calculations around margins.  While you may not have a data quality measure at this point, this step will help you focus on the major pain points. 

  2. Focus on Core Definitions. Organizations need to define what they consider authoritative and truthful.  A common product catalog is necessary for efficient analysis.  Everyone needs to look at the same financial report describing projects in order to discuss profitability.  The underlying contents and calculations may be subject to debate, but a shared report creates a foundation to work from rather than perpetuating different versions of reality floating through the organization. 

2. Turn on the Easy Free Stuff 

World-class Customer Relationship Management platforms come with built-in features that help with data quality.  Salesforce has a robust, customizable duplicate management solution built into its platform.  These features are free to use and only need to be configured, not coded. 

 

  1. Activate / Customize Duplicate Rules. With Salesforce, you can establish and customize matching rules that determine whether a duplicate record exists, then define the rules that govern what a user is allowed to do when creating or editing the record. Typically, name and some address fields are configured for people and organizations, but you can create matching and duplicate rules on most of the objects in your instance. 

  2. Turn on the Potential Duplicate component.  Out-of-the-box components will highlight that a potential duplicate exists or that a user is attempting to create a potential duplicate record. 

  3. Automate Duplicate Management.  Once you get a feel for how duplicate management works for your organization, you can set up rules that block (rather than warn) users from creating duplicates and establish guidelines for merging duplicate accounts, leads, and contacts. Duplicates can be managed in the moment or through global reports and actions. 
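Salesforce matching rules are configured rather than coded, but the idea behind them can be illustrated with a short sketch. The field names, the exact-email-or-fuzzy-name rule, and the similarity threshold below are assumptions for illustration, not Salesforce's actual matching algorithm.

```python
# Illustrative sketch of a matching rule: flag a potential duplicate on an
# exact email match OR an exact last name plus a fuzzy first-name match.
# This mimics the concept of configurable matching rules; it is not
# Salesforce's implementation.
from difflib import SequenceMatcher

def names_similar(a, b, threshold=0.8):
    """Fuzzy comparison, standing in for a 'fuzzy' field match method."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def is_potential_duplicate(new, existing):
    if new["email"] and new["email"].lower() == existing["email"].lower():
        return True  # exact email match
    return (new["last"].lower() == existing["last"].lower()
            and names_similar(new["first"], existing["first"]))

existing = {"first": "Jonathan", "last": "Smith", "email": "jon@example.com"}
print(is_potential_duplicate(
    {"first": "Jonathon", "last": "Smith", "email": ""}, existing))  # → True
```

The design choice mirrors the configuration trade-off in the platform: tighten the threshold and you warn less but miss more duplicates; loosen it and you catch more at the cost of false alarms.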

If completeness is an issue for your organization, creating custom formulas for your key objects is an inexpensive way to quantify the gaps in your data.  These formulas can be as simple as calculating the percentage of expected fields that have values.  Reports built on these formulas are instructive for constructing your initial quality measure and for reinforcing staff training. 
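A completeness calculation of that kind might look like the following sketch. In Salesforce itself this would live in a formula field rather than external code, and the expected-field list here is a hypothetical example for a contact record.

```python
# Sketch of a completeness score: percent of expected fields with values.
# The expected-field list is a hypothetical example, not a standard.
EXPECTED_FIELDS = ["first_name", "last_name", "email", "phone", "mailing_city"]

def completeness_pct(record, expected=EXPECTED_FIELDS):
    """Percent of expected fields that are populated on this record."""
    filled = sum(1 for field in expected if record.get(field))
    return 100.0 * filled / len(expected)

contact = {"first_name": "Ada", "last_name": "Lovelace",
           "email": "ada@example.com", "phone": "", "mailing_city": None}
print(completeness_pct(contact))  # → 60.0
```

Reporting on the score across all records then shows which fields your staff are skipping, which is the training signal the paragraph above describes.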

3. Focus on Master Data Management & Integrations 

Once you decide where the authoritative data resides, then it’s time to focus on how to keep everything up to date and synchronized.  

Data integration plays a critical role in multi-cloud or hybrid software ecosystems.  Consider a scenario where your sales team uses Salesforce while finance, human resources, or supply chain solutions run on other platforms. This is not uncommon, but keeping customer, employee, and order information consistent and secure can present challenges.  Point-to-point solutions work for simple cases, but complex, event-driven cases may require Enterprise Application Integration (EAI). EAI solutions like MuleSoft provide a configurable means to keep all of these systems connected. 
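The event-driven pattern behind that kind of integration can be sketched roughly as follows. The system names and fields are illustrative, and an EAI platform like MuleSoft replaces this hand-wiring with configurable connectors.

```python
# Minimal sketch of event-driven sync: when a customer record changes in
# one system, every subscribed system receives and applies the change.
# The "systems" here are just dictionaries standing in for real platforms.
class EventBus:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def publish(self, event):
        # Fan the change out to every connected system.
        for handler in self.subscribers:
            handler(event)

# Stand-ins for the downstream systems' customer tables.
finance_db, hr_db = {}, {}

bus = EventBus()
bus.subscribe(lambda e: finance_db.update({e["id"]: e["email"]}))
bus.subscribe(lambda e: hr_db.update({e["id"]: e["email"]}))

# A CRM update is published once and lands everywhere.
bus.publish({"id": "C-1001", "email": "new@example.com"})
print(finance_db["C-1001"], hr_db["C-1001"])
```

The contrast with point-to-point integration is that adding a fourth system means one new subscription here, rather than new connections to every existing system.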

There are numerous third-party data services that integrate with Salesforce to keep your contact and account addresses up to date, including National Change of Address screening services that also qualify your direct mail for bulk rates. There are also a number of data enrichment services that provide demographic and other socio-economic factors. Candoris has regularly integrated wealth-screening enrichment services to help support major donor programs. These services typically improve accuracy, timeliness, and relevance for your end users. 

Just because data is in Salesforce doesn’t mean it is internally consistent.  As organizations install packages to support their programs, we have observed that these packages do not always sync with your primary objects.  In one case, we found gender populated in three different contact fields: different programs, processes, and packages had thwarted a consistent view of the constituent.  Bulk synchronization or standardizing on NPSP design patterns can resolve these issues, but it takes some digging to discover the discrepancies.  This is so critical to our clients’ success that Candoris has built internal tools to help simplify data migrations and tune-ups. 

Get Started with Data Discovery 

 

The journey to becoming a truly data-driven company is ongoing, and every organization has to approach it strategically; that is where Candoris can be an asset. Getting the right data, in the right place, at the right time does not have to come with major upfront costs. Reach out to find out how we can assist with your data needs through a data discovery call. 

 

About the author

Sr Solutions Consultant

Trevor loves connecting the dots between strategy and execution. After a decade of volunteering, he left a public-sector career to focus on fusing together technical, operational, and marketing goals for nonprofit organizations. He also served as a field-based CIO for a global nonprofit and set product direction for nonprofit marketing and data solutions before coming to Candoris. Trevor holds a PhD from Capella University, where his research focused on nonprofit technology strategy, an MBA from Drexel University, and a BA from Messiah University, where he serves as Adjunct Faculty.