The Challenge of Data Accessibility

By: Stephen Galloway

According to a recent study by Forrester, less than 0.5% of all data is ever analyzed and used. Let that statistic sink in for a moment. Based on this insight, every organization should be asking the following questions right now:

  • What data do we have?
  • How do we get it into the hands of decision makers?
  • Why aren’t we more effectively using our data?

Today’s IT data investment dollars are often funneled into the creation of new advanced analytics capabilities: big data integration, machine learning, and data science solutions. What if I told you those investments deliver little value if the data output never reaches its intended audience? Or that inaccessible data sources can lead to false positives? Here’s what each of these capabilities has in common: none will work well (or at all) unless the challenge of data accessibility shares the spotlight with these complementary, high-visibility initiatives.

We are currently in the midst of a paradigm shift in regard to data accessibility. The old paradigm saw a data provider (usually IT) spending weeks or months creating accessible datasets for downstream technical and business users, with a focus on accumulation and storage. The new paradigm represents not only a shift in the user base, but also a dramatic shift in the overall delivery strategy. There is now a wider range of data user profiles, from highly technical data scientists and analysts to mildly technical business analysts and managers. The adoption of cloud-based, plug-and-play tools has removed technical hurdles and delivered enhanced speed, increased agility, more robust exploration, and more fruitful discovery for a wide range of business users. This shift is fueled by a convergence of technology innovation and ever-evolving business needs:

  • Business decisions are now made at a (comparatively) breakneck pace, and downstream data consumers can’t afford to wait weeks or months for IT to deliver data solutions
  • New sources of data whose sheer volume creates significant barriers to accessibility and usability
  • The emergence of the ‘citizen data scientist’, whose technical skill set supports hands-on data work including the construction of their own queries, models, and blended data sets using the latest data analytics tools

Recognizing that data accessibility is an opportunity within your organization is a great first step. Where do you go from there? First, take stock of your current situation. Are there opportunities to enable self-service data capabilities? Self-service data provisioning serves as the catalyst for the adoption of ‘citizen data scientist’ practices and the enablement of advanced data science activities. The core ‘citizen data scientist’ practices – exploration, discovery, data prep, visualization, and advanced analytics – will continue to drive significant increases in customer, market, and product insights, and they are in turn supported by streamlined access to both traditional and emerging data sets. Without self-service data access, these emerging capabilities will, at best, be inefficient and, at worst, ineffective.

So, how can your organization begin the transformative process that enables self-service data access? Below are key questions and actions that can be taken immediately, with minimal investment and maximum return.

  • Q: What data do we have?
  • A: Inventory your data.

Evaluate how your data assets are organized today, filtering each through the lens of key business use cases, with a focus on how end users could be better served if the data were available in a self-service platform. Re-organize, consolidate, and integrate the data, based on the output of that evaluation, in a way that enables the self-service capability. In short: simplify access to both traditional and emerging data assets. Bigger is not always better.
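To make “inventory your data” concrete, here is a minimal sketch of a first-pass inventory over a relational source, assuming SQLAlchemy and a reachable connection string (both placeholders, not a prescribed tool). The idea of tagging each table with a business use case is exactly the evaluation step described above.

```python
# Minimal, illustrative data inventory: list every table in a database,
# capture its size and columns, and leave room for a business-use-case tag.
# The connection string and use-case tagging are hypothetical examples.
from sqlalchemy import create_engine, inspect, text

def inventory_database(connection_string: str) -> list[dict]:
    engine = create_engine(connection_string)
    inspector = inspect(engine)
    inventory = []
    with engine.connect() as conn:
        for table in inspector.get_table_names():
            row_count = conn.execute(
                text(f'SELECT COUNT(*) FROM "{table}"')
            ).scalar()
            inventory.append({
                "table": table,
                "columns": [col["name"] for col in inspector.get_columns(table)],
                "row_count": row_count,
                "business_use_case": None,  # filled in during the evaluation
            })
    return inventory

if __name__ == "__main__":
    # Hypothetical connection string; substitute your own source.
    for entry in inventory_database("postgresql://analytics_ro@dbhost/warehouse"):
        print(entry["table"], entry["row_count"], entry["business_use_case"])
```

Even a flat list like this makes the consolidation decisions above far easier to reason about than a mental model of “everything in the warehouse.”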

  • Q: How do we get it into the hands of decision makers?
  • A: Assess current data governance, access, and usage controls.

Identify areas of sensitivity and ensure that the proper controls are in place to restrict access and use as required. Leverage API-driven solutions that provide real-time data capabilities, and data sandboxes that create a ‘safe’ space for data exploration without compromising the security of the core data asset.
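As one possible illustration of such controls, the sketch below shows a small FastAPI endpoint that checks a caller’s role before returning a dataset and masks a sensitive field for non-privileged users. The roles, header-based auth, and in-memory data are invented for the example; in practice these checks would sit on top of your real identity and governance tooling.

```python
# Illustrative only: a tiny API that enforces role-based access and masks
# a sensitive field before data leaves the service. Roles, data, and the
# header-based scheme are hypothetical stand-ins for real governance tooling.
from fastapi import FastAPI, Header, HTTPException

app = FastAPI()

# Hypothetical dataset with one sensitive column.
CUSTOMERS = [
    {"id": 1, "segment": "enterprise", "annual_spend": 120_000, "ssn": "123-45-6789"},
    {"id": 2, "segment": "smb", "annual_spend": 18_000, "ssn": "987-65-4321"},
]

SENSITIVE_FIELDS = {"ssn"}
PRIVILEGED_ROLES = {"data_steward"}

@app.get("/customers")
def get_customers(x_role: str = Header(default="analyst")):
    """Return customer records, masking sensitive fields for non-privileged roles."""
    if x_role not in {"analyst", "data_steward"}:
        raise HTTPException(status_code=403, detail="Unknown role")
    records = []
    for row in CUSTOMERS:
        if x_role in PRIVILEGED_ROLES:
            records.append(dict(row))
        else:
            records.append(
                {k: ("***" if k in SENSITIVE_FIELDS else v) for k, v in row.items()}
            )
    return records
```

The same pattern extends naturally to sandboxes: serve masked or sampled copies through the API while the core data asset stays locked down.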

  • Q: Why aren’t we more effectively using our data?
  • A: Analyze the state of your metadata.

Focus on business metadata, as it serves as the key translational layer between the technical data solution architecture and the intended business use case. No one will leverage the self-service capability if they don’t understand how the data creates value in the first place – in other words, why should I use this data to begin with?
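One lightweight way to make that translational layer tangible is a business glossary that pairs each field with a plain-language definition, an owner, and the decisions it supports. The sketch below is a minimal illustration with invented fields; real deployments would typically use a data catalog product rather than hand-maintained code.

```python
# A minimal business-metadata glossary: technical field names mapped to
# plain-language meaning, ownership, and the decisions they support.
# Field names and descriptions are invented for illustration.
from dataclasses import dataclass

@dataclass
class BusinessTerm:
    field: str           # technical column name
    definition: str      # what the value means in business language
    owner: str           # who answers questions about it
    supports: list[str]  # decisions or use cases this field informs

GLOSSARY = {
    "churn_risk_score": BusinessTerm(
        field="churn_risk_score",
        definition="Modeled probability (0-1) that a customer cancels in the next 90 days.",
        owner="Customer Analytics",
        supports=["retention campaigns", "account prioritization"],
    ),
    "annual_spend": BusinessTerm(
        field="annual_spend",
        definition="Trailing twelve-month revenue attributed to the customer, in USD.",
        owner="Finance",
        supports=["segmentation", "forecasting"],
    ),
}

def describe(field: str) -> str:
    term = GLOSSARY.get(field)
    return term.definition if term else "No business definition on file."

print(describe("churn_risk_score"))
```

If a business user can look up a field and immediately see what it means and which decisions it informs, the “why should I use this data?” question largely answers itself.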

A final question: what happens if your organization chooses not to invest in improving data accessibility and enabling self-service data provisioning? The Forrester study provides some additional insight: Richard Joyce, Senior Analyst at Forrester, hypothesized that “just a 10% increase in data accessibility will result in more than $65 million additional net income for a typical Fortune 1000 company.” The cost of poor data accessibility is clear: either embrace this opportunity and enable the capabilities required to uncover the insights and actions that drive future business success, or ignore it and risk falling behind in an economy fueled by disruption and characterized by unpredictability.
