
Here’s why most enterprises are only capturing a fraction of data’s potential

The surge in data has paved the way for big business promises, but how well are we taking advantage of the opportunities on offer from this digital deluge?

There are countless articles, white papers, presentations and events preaching about how important it is to glean insights from your ever-growing repositories of data to make better business decisions, predict performance and truly unlock your potential. 

When it comes to putting this into action, however, research shows there’s still plenty of work needed to truly tap into enterprise data’s potential across various industries.

Education

According to a report exploring the implementation of learning analytics (analysis of student behaviours and experiences) at British higher education institutions, only 1.9% have fully implemented it, while almost 50% are yet to implement anything.

Utilities

Data captured through IoT is estimated to have a total potential economic impact of $3.9 trillion to $11.1 trillion a year by 2025. In the utilities sector, the potential economic impact of IoT in areas such as operations management, equipment maintenance, and health and safety is $200 billion to $900 billion.

Government & healthcare

Only 10-20% of the potential of Big Data has been tapped in the government and healthcare sectors.

Finance

Massive data integration in retail banking could have a potential global impact of up to $260 billion.

What's holding back our ambitions?

A major factor in adapting to data-driven business is building robust data architectures and agile data processes that can effectively store your data, make sense of it and enable you to leverage it. To put it simply, traditional Data Warehouses aren’t built for massive volumes of data and analytics. Effective enterprise Data Warehousing, information management and governance are needed to underpin effective analytics.

I recently caught up with Dan Linstedt, inventor of Data Vault 2.0. He offered up a unique analogy that both amused me and perfectly captured the way a lot of businesses are approaching this problem:

“Imagine you have a VW Beetle and one day you decide to start providing a transportation service, picking people up from bus stops. You want to pick up 50 people, but how do you fit them into a small car?”

“You’re trying to force an architecture built to transport 4 people into doing something it’s not made for. However, one approach might be to chop it in half, extend it, lift the wheelbase and add suspension to support more weight – essentially, you turn it into a bus. This is re-engineering the whole solution, and this is what businesses are facing with Big Data.”

“Companies have Data Warehouses that have been running along fine for a few years but now they’re inundated with massive volumes of data. They think the solution is to re-engineer everything to cope, but it’s a massive and costly undertaking.”

This is where Data Vault 2.0 comes in

Data Vault 2.0 is a data architecture that has been built at the multi-petabyte solution level, so the engineering, design and processes of scale-free architecture are already embedded in it.   

When you build a Data Vault solution, you don’t have to re-engineer in order to cope with new business demands. It’s a foundation that can be used to scale your business when you want to, without re-engineering your IT solution.

Data Vault 2.0 offers impressive speed when loading or analysing large data sets. This means that IT can now keep pace with the agile demands of the business, delivering new reports in days rather than months, while enhancing quality and trust, with full auditability back to the source data.

However, what is even more impressive is that advanced ETL automation and code generation mean that a Data Vault can reduce TCO by over 50% and generate a full ROI within just one year! 

A recent deployment of Data Vault 2.0 shows just how powerful the solution can be:

How Pepper Group uses Data Vault 2.0

Pepper Group is a global financial services organisation with expertise in the residential and commercial property sectors as well as in consumer, auto and equipment finance. This breadth presents a unique set of data management challenges, including increasingly complex data silos.

The Need

Pepper Group needed an infrastructure capable of leveraging these separate systems to provide the flexibility and scalability to be more agile and responsive to the business. It wanted to ensure the introduction of new products or services could be achieved quickly and efficiently.

The Solution

Pepper Group engaged with Certus Solutions to explore the potential of Data Vault 2.0 for the business. At its core, Data Vault 2.0 offered Pepper an infinitely scalable architecture that was quicker to implement, faster at loading data and creating reports, and ensured the data was trustworthy, correct and supportive of business processes.

The Benefits

By adopting the Data Vault 2.0 methodology, Pepper is now focused on rationalising three Data Warehouses into a single Data Vault. Introducing new products is significantly easier because the data process is streamlined, with only the Hubs and Links needing to be updated. A new mortgage product was introduced in just four weeks, minimising the scope and cost of re-engineering for change.
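The Hub/Link/Satellite separation that makes this possible can be sketched in a few lines of Python. This is a toy illustration of the general Data Vault pattern, not Pepper Group's actual model: all table names, keys and attributes below are hypothetical.

```python
import hashlib

# Toy sketch of the three core Data Vault constructs:
# Hubs (business keys), Links (relationships), Satellites (attributes).
# Names and structures are illustrative only.

def hash_key(*parts: str) -> str:
    # Deterministic surrogate key derived from business key parts,
    # a common Data Vault convention.
    return hashlib.md5("||".join(parts).encode()).hexdigest()

product_hub = {}      # product hash key  -> business key
customer_hub = {}     # customer hash key -> business key
holding_link = set()  # (customer_hk, product_hk) relationships
product_sat = {}      # product hash key  -> descriptive attributes

def add_product(code: str, attrs: dict) -> str:
    hk = hash_key(code)
    product_hub[hk] = code   # new row in the Hub...
    product_sat[hk] = attrs  # ...and its Satellite; nothing else changes
    return hk

def add_customer(cust_id: str) -> str:
    hk = hash_key(cust_id)
    customer_hub[hk] = cust_id
    return hk

# Introducing a new mortgage product touches only the Hub, its Satellite
# and any Links -- existing structures are untouched, so no re-engineering.
mortgage_hk = add_product("MORT-01", {"rate": 3.99, "term_years": 30})
cust_hk = add_customer("CUST-1001")
holding_link.add((cust_hk, mortgage_hk))
```

The point of the pattern is visible in the last three lines: a new product is purely additive, which is why a change like Pepper's four-week mortgage launch avoids restructuring the existing warehouse.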

Download the full Case Study to find out more about how Pepper Group is delivering projects faster while cutting costs using Data Vault 2.0.

Download Case Study

 

Author: Julien Redmond

Helping customers understand how they can improve the quality and value of Business Intelligence and Business Analytics is Julien’s passion. His extensive experience in Enterprise Information Management strategy makes him an expert in delivering information governance across all business integration layers.

28 September 2017