It’s not the data management – it’s you

When poor quality data, duplicated effort and siloed information impact operational efficiencies, organisations might feel inclined to point a finger at data management. But it’s not data management that’s broken, it’s enterprise strategy.

By Mervyn Mooi, director at Knowledge Integration Dynamics (KID)

Recently, international thought leaders speculated that data management might be ‘broken’ due to the growth in silos of data and a lack of data standardisation. They pointed out that data silos were still in place, much as they were back in the 1980s, and that the dream of standardised, centralised data providing a single view was as elusive as ever.


In South Africa, we also see frustrated enterprise staff increasingly struggling to gain control of growing volumes of siloed, duplicated and non-standardised data. This is despite the fact that most organisations believe they have data management policies and solutions in place.

The truth of the matter is that data management is not what’s at fault. The problem lies in enterprise-wide data management strategies, or the lack thereof.

Data management per se is never really broken. Data management refers to a set of rules, policies, standards and governance applied to data throughout its lifecycle. While most organisations have these in place, they do not always have uniform data management standards throughout the organisation. Various operating units may have their own legacy models, which they believe best meet their needs. In mergers and acquisitions, new companies may come aboard, each bringing their own tried and trusted data management policies. Each operating unit may be under pressure to deliver business results in a hurry, so they continue doing things the way that has always worked for them.

The end result is that there is no standardised model for data management across the enterprise. Efforts are duplicated, productivity suffers and opportunities are lost.

In many cases where questions are raised around the effectiveness of data management, one will find that it is not being applied at all. Unfortunately, many companies are not yet mature in terms of data management and will continue to experience issues, anomalies and politics in the absence of enterprise-wide data management. But this will start to change in future.

In businesses across the world, but particularly in Africa, narrower profit margins and weaker currencies are forcing management to look at back-end processes for improved efficiencies and cost cutting. Implementing more effective data management strategies is an excellent place for them to start.

Locally, some companies are now striving to develop enterprise-wide strategies to improve data quality and bring about more effective data management. Large enterprises are hiring teams and setting up competency centres to clean data at enterprise level and move towards effective master data management for a single view of the customer that is used in a common way across various divisions.

Enterprise-wide data management standards are not difficult to implement technology-wise. The difficult part is addressing the company politics that stand in the way and driving the change management needed to overcome people’s resistance to new ways of doing things. You may even find resistance to improved data management efficiencies simply because manual processes and inefficient data management keep more people in jobs – at least for the time being.

But there is no question that enterprise-wide standards for data management must be introduced to overcome silos of data, silos of competency, duplication of effort and sub-par efficiency. Local large enterprises, particularly banks and other financial services enterprises, are starting to follow the lead of international enterprises in moving to address this area of operational inefficiency. Typically, they find that the most effective way to overcome the data silo challenge is to slowly adapt their existing ways of working to align with new standards, in a piecemeal fashion that adheres to the grand vision.

The success of enterprise-wide data management strategies also rests a great deal on management: you need a strong mandate from enterprise-level executives to secure the buy-in and compliance needed to achieve efficiencies and enable common practices. In the past, C-suite business executives were not particularly strong in driving data management and standards – they were typically focused on business results, and nobody looked at operating costs as long as the service was delivered. However, business is now focusing more on operating and capital costs and discovering that data management efficiencies will translate into better revenues.

With enterprise-wide standards for data management in place, the later consumption and application of that data is highly dependent on the users’ requirements, intent and discipline in maintaining the data standards. Data items can be redefined, renamed or segmented in line with divisional needs and processes. But as long as the data is not manipulated out of context or in an unprotected manner, and as long as governance is not overlooked, the overall data quality and standards will not suffer.
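
As a purely illustrative sketch of this principle (the dictionary, divisions and aliases below are invented for the example, not a prescribed method), divisional renames can be permitted while still resolving back to one governed, canonical definition:

    # Hypothetical canonical dictionary: one governed definition per data item.
    canonical_dictionary = {
        "CUST_ID": {"type": "string", "owner": "Enterprise Data Office", "pii": True},
        "CUST_NAME": {"type": "string", "owner": "Enterprise Data Office", "pii": True},
    }

    # Divisions may rename items to suit their processes, but every alias must
    # resolve to a governed canonical item: renaming is fine, redefining is not.
    divisional_views = {
        "retail": {"ClientNo": "CUST_ID", "ClientName": "CUST_NAME"},
        "insurance": {"PolicyHolderId": "CUST_ID"},
    }

    def resolve(division: str, alias: str) -> dict:
        """Return the governed definition behind a divisional alias."""
        return canonical_dictionary[divisional_views[division][alias]]

    print(resolve("retail", "ClientNo"))           # the governed CUST_ID definition
    print(resolve("insurance", "PolicyHolderId"))  # ...regardless of the alias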


How to tell if your organisation is strategically data driven

Striving to become a ‘data driven organisation’ is not enough, says Knowledge Integration Dynamics (KID).

By Mervyn Mooi, director at Knowledge Integration Dynamics (KID)

There is a great deal of focus on the ‘data driven organisation’ now. But this focus misses the point – everyone is data driven to some degree. The question should be: are you strategically data driven?

Everyone – from the man in the street to the large enterprise – is driven by data. This data might emerge from weather reports, calendars, meeting schedules and commitments. A plethora of data drives decisions and processes all the time. But this does not mean data is being used effectively. In fact, in this scenario, the data drives us. Only when data is used strategically can we turn the situation around so that we drive the data, using it as a powerful tool to improve business.


While there is always room for improvement and innovation in the gathering, management and application of data, many companies are already strategically data driven. These companies are relatively easy to identify, based on a number of traits they have in common:

  • Innovation and market disruption. Innovation can happen as a once-off ‘accident’, but a sustainable business that consistently innovates and disrupts is certainly basing its success on the strategic use of data. The sustainably innovative enterprise harnesses quality internal and external data and analytics to inform business decisions, improve products and customer experience, and maintain its competitive edge.
  • A culture of rationalisation. When a company is strategically data driven, it has achieved a clear understanding of where its resources can be put to the best use, where bottlenecks and duplication occur, and how best to improve efficiencies. A company with a culture of rationalisation, a focus on deduplication and a tendency to automate and reduce manual interventions clearly has good insight into its internal data.
  • A ‘Governance over all’ approach to business and operations. Only an organisation with quality data delivering effective insights into all spheres of the business is in a position to apply effective rules and governance over all systems, operations and processes.
  • Decisions are based on interrogating the right data with the right questions, using the right models.  A strategically data driven organisation does not tolerate poor quality data or interrogate this data in a haphazard fashion. The widespread use of quality data and analytics is evident in every aspect of the business, and is the basis of every decision within the organisation. The strategically data driven organisation also routinely tests new theories, asks the ‘what if’ questions, and constantly monitors and evaluates outcomes to add to the quality of its data and analytics.
  • ‘More than fair’ labour practices. Organisations with a good grasp of their data know what impact employee skills development and job satisfaction have on business processes and revenues. Strategically data driven organisations tend to leverage their skills investments with better working conditions, incentives, salaries, training and perks.
  • Strong leadership at all levels. Strong leadership is the base enabler for the evolution of all the other traits; and strong leaders are supported and measured by data. Data is the lifeblood of the organisation, supporting good leadership by allowing managers to improve efficiencies, ensure effective resource allocation, monitor and improve employee performance and measure their own performance as managers.

Any organisation not displaying these traits needs to ask: “Are we taking an organised approach to data usage and information consumption in order to make our business more effective? Are we using our data to look both inward and outward effectively, finding areas for improvement within our operations and scope for innovation and business growth in our market?”

Companies still fail to protect data

Despite having comprehensive information security and data protection policies in place, most South African businesses are still wide open to data theft and misuse, says KID.

By Mervyn Mooi, director at the Knowledge Integration Dynamics Group

Numerous pieces of legislation, including the Protection of Personal Information (POPI) Act, and governance guidelines like King III, are very clear about how and why company information, and the information companies hold on partners and customers, should be protected. The penalties and risks involved in not protecting data are well known too. Why, then, is data held within South African companies still inadequately protected?

In our experience, South African organisations have around 80% of the necessary policies and procedures in place to protect data. But the physical implementation of those policies and procedures is only at around 30%. Local organisations are not alone – a recent IDC study has found that two-thirds of enterprises internationally are failing to meet best practice standards for data control.


The risks of data loss or misuse are present at every stage of data management – from gathering and transmission through to destruction of data. Governance and control are needed at every stage. A company might have its enterprise information systems secured, but if physical copies of data – like printed documents or memory sticks – are left lying around an office, or redundant PCs are sent for recycling without effective reformatting of the hard drives, sensitive data is still at risk. Many overlook the fact that confidential information can easily be stolen in physical form.

Many companies fail to manage information sharing by employees, partners and other businesses. For example, employees may unwittingly share sensitive data on social media: what may seem like a simple tweet about drafting merger documents with the other party might violate governance codes. Information shared with competitors in exploratory merger talks might be misused by the same competitors later.

We find that even larger enterprises with policies in place around moving data to memory sticks and mobile devices don’t clearly define what confidential information is, so employees tweet, post or otherwise share information without realising they are compromising the company’s data protection policies. For example, an insurance firm might call a client and ask for the names of acquaintances who might also be interested in its product, but under the POPI Act this is illegal. There are myriad ways in which sensitive information can be accessed and misused, with potentially devastating outcomes for the company that allows this to happen. In a significant breach, someone may lose their job, or there may be penalties or a court case as a result.

Most organisations are aware of the risks and may have invested heavily in drafting policies and procedures to mitigate them. But the best-laid governance policies cannot succeed without effective implementation. Physical implementation begins with analysing data risk: discovering, identifying and classifying the data, then assessing its risk based on value, location, protection and proliferation. Once the type and level of risk have been identified, data stewards need to take tactical and strategic steps to ensure data is safe.
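
A minimal Python sketch of that risk-analysis step might look as follows; the attributes, weights and scoring scale are assumptions for illustration, not a standard model:

    from dataclasses import dataclass

    @dataclass
    class DataAsset:
        name: str
        business_value: int     # 1 (low) to 5 (high)
        exposed_location: bool  # e.g. laptop, memory stick, public file share
        encrypted: bool
        copies: int             # proliferation: how many copies exist

    def risk_score(asset: DataAsset) -> int:
        """Score an asset on value, location, protection and proliferation."""
        score = asset.business_value * 2             # value
        score += 3 if asset.exposed_location else 0  # location
        score += 0 if asset.encrypted else 3         # protection
        score += min(asset.copies, 5)                # proliferation, capped
        return score

    assets = [
        DataAsset("customer_master", business_value=5, exposed_location=False,
                  encrypted=True, copies=2),
        DataAsset("payroll_extract.xlsx", business_value=4, exposed_location=True,
                  encrypted=False, copies=7),
    ]
    # Highest-risk assets first, so stewards know where to act.
    for asset in sorted(assets, key=risk_score, reverse=True):
        print(asset.name, risk_score(asset))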

These steps within the data lifecycle need to include:

  • Standards-based data definition and creation, to ensure that security and privacy rules are implemented from the outset.
  • Strict provisioning of data security measures such as data masking, encryption/decryption and privacy controls to prevent unauthorised access to and disclosure of sensitive, private and confidential information (a masking sketch follows this list).
  • The organisation also needs to securely provision test and development data by automating data masking, data sub-setting and test data generation capabilities.
  • Attention must also be given to data privacy and accountability by defining access based on privacy policies and laws – for instance, who may view personal, financial, health or confidential data, and when.
  • Finally, archiving must be addressed: the organisation must ensure that it securely retires legacy applications, manages data growth, improves application performance, and maintains compliance with structured archiving.
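
By way of illustration only, the masking step mentioned above might use deterministic pseudonymisation, so test data stays consistent across runs without exposing real values; the field names and salt below are invented for the example:

    import hashlib

    SALT = "replace-with-a-secret-salt"  # illustrative; manage secrets properly

    def mask(value: str, keep: int = 0) -> str:
        """Replace a sensitive value with a stable pseudonym, optionally
        keeping the last `keep` characters for readability."""
        digest = hashlib.sha256((SALT + value).encode()).hexdigest()[:10]
        return "MASK-" + digest + (value[-keep:] if keep else "")

    record = {"name": "Thandi Nkosi", "id_number": "8001015009087"}
    test_record = {
        "name": mask(record["name"]),
        "id_number": mask(record["id_number"], keep=3),
    }
    print(test_record)  # the same input always yields the same masked output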

Policies and awareness are not enough to address the vulnerabilities in data protection. The necessary guidelines, tools and education exist, but to succeed, governance has to move off paper and into action. The impact of employee education is temporary – it must be refreshed regularly, and it must be enforced with systems and processes that entrench security within the database, at file level, server level, network level and in the cloud. This can be a huge task, but it is a necessary one when architecting for the future.

In the context of the above, a big question to ponder is: Has your organisation mapped the rules, conditions, controls and standards (RCCSs), as translated from accords, legislation, regulation and policies, to your actual business and technical processes and data domains?

Big data follows the BI evolution curve

Big data analysis in South Africa is still at an early stage of maturity, and has yet to evolve in much the same way that BI did 20 years ago, says Knowledge Integration Dynamics.

By Mervyn Mooi, director at Knowledge Integration Dynamics (KID)

Big data analysis tools aren’t ‘magical insight machines’ spitting out answers to all of a business’s questions: as is the case with all business intelligence tools, there are lengthy and complex processes that must take place behind the scenes before actionable and relevant insights can be drawn from the vast and growing pool of structured and unstructured data in the world.


South African companies of all sizes have an appetite for big data analysis, but because the country’s big data analysis segment is relatively immature, they are still focused on their big data strategies and the complexity of actually getting the relevant data out of this massive pool of information. We find many enterprises currently looking at technologies and tools like Hadoop to help them collate and manage big data. There are still misconceptions around the tools and methodologies for effective big data analysis: companies are sometimes surprised to discover they are expecting too much, and that a great deal of ‘pre-work’, strategic planning and resourcing is necessary.

Much like BI in its early days, big data analysis starts as a relatively unstructured, ad hoc discovery process; but once patterns are established and models are developed, the process becomes a structured one.

And in the same way that BI tools depend on data quality and relationship linking, big data depends on some form of qualification before it is used. The data needs to be profiled for flaws, which must be cleansed (quality); it must be placed in relevant context (relationships); and it must be timeous in the context of what is being searched or reported on. Methods must be devised to qualify much of the unstructured data, as a big question remains around how trusted and accurate information from the internet will be.
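
As a small sketch of such qualification, assuming pandas and an invented feed of social-media mentions, a profiling pass can check quality (nulls and duplicates), relevancy (linkage to known customers) and timeliness:

    import pandas as pd

    feed = pd.DataFrame({
        "customer_id": ["C1", "C2", None, "C2"],
        "mention": ["great service", "slow app", "???", "slow app"],
        "captured_at": pd.to_datetime(
            ["2016-05-01", "2016-05-02", "2016-05-02", "2016-05-02"]),
    })

    # Quality: profile for flaws -- null keys and exact duplicates.
    print("null keys:", feed["customer_id"].isna().sum())
    print("duplicate rows:", feed.duplicated().sum())

    # Relevancy: keep only mentions that link to a known customer record.
    customers = pd.DataFrame({"customer_id": ["C1", "C2"]})
    relevant = feed.dropna().merge(customers, on="customer_id")

    # Timeliness: discard anything outside the reporting window.
    cutoff = pd.Timestamp("2016-05-02")
    print(relevant[relevant["captured_at"] >= cutoff])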

The reporting and application model that uses this structured and unstructured data must be addressed, and the models must be tried and tested. In the world of sentiment analysis and trend forecasting based on ever-changing unstructured data, automated models are not always the answer. Effective big data analysis also demands human intervention from highly skilled data scientists who have both business and technical experience. These skills are still scarce in South Africa, but we are finding a growing number of large enterprises retaining small teams of skilled data scientists to develop models and analyse reports.

As local big data analysis matures, we will find enterprises looking to strategise on their approaches, the questions they want to answer, what software and hardware to leverage and how to integrate new toolsets with their existing infrastructure. Some will even opt to leverage their existing BI toolsets to address their big data analysis needs.  BI and big data are already converging, and we can expect to see more of this taking place in years to come.

SA companies are finally on the MDM and DQ bandwagon

Data integration and data quality management have become important factors for many South African businesses, says Johann van der Walt, MDM practice manager at Knowledge Integration Dynamics (KID).

We have always maintained that solid data integration and data quality management are essential building blocks for master data management (MDM), and we’re finally seeing that customers believe this too. One of the primary drivers is the desire for service-oriented architecture (SOA) solutions, for which MDM is a prerequisite if they are to be effective. SOA relies on core data such as products, customers, suppliers, locations and employees, on which companies build the capacity for lean manufacturing, supplier collaboration, e-commerce and business intelligence (BI) programmes. Master data also informs transactional and analytics systems, so bad-quality master data can significantly impact revenues and customer service, as well as company strategies.

Taken in the context of a single piece of data, MDM simply means ensuring one central record of a customer’s name, a product ID or a street address, for example. But the scale is significant: McKinsey found in 2013 that companies employing more than 1 000 people have, on average, around 200 terabytes of data. Getting even small percentages of that data wrong can have wide-ranging ramifications for operational and analytical systems, particularly as companies attempt to roll out customer loyalty programmes or new products, let alone develop new business strategies. It can also negatively impact business performance management and compliance reporting. In the operational context, transactional processing systems refer to the master data for order processing, for example.
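
A toy illustration of that ‘one central record’ idea: match duplicate customer rows from two systems and let the most complete values survive. The matching rule (normalised name plus postcode) and the field names are simplifications invented for the example:

    def match_key(rec: dict) -> tuple:
        """A naive matching rule: normalised name plus postcode."""
        return (rec["name"].strip().lower(), rec["postcode"])

    def merge(a: dict, b: dict) -> dict:
        """Survivorship: prefer non-empty values, with `a` winning ties."""
        return {k: a.get(k) or b.get(k) for k in set(a) | set(b)}

    crm = {"name": "J. van der Walt", "postcode": "2196", "email": ""}
    erp = {"name": "j. van der walt ", "postcode": "2196",
           "email": "j.vdwalt@example.com"}

    if match_key(crm) == match_key(erp):
        golden = merge(crm, erp)
        print(golden)  # one master record, with the ERP email surviving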


MDM is not metadata, which refers to technical details about the data. Nor is it data quality. However, MDM must have good quality data in order to function correctly. These are not new concerns. Both MDM and data quality have existed for as long as there have been multiple data systems operating in companies. Today, though, these concerns are exacerbated by the volume of data, the complexity of data, the most acute demand for compliance in the history of business, and the proliferation of business systems such as CRM, ERP and analytics. Add to that the fact that many companies run multiple instances of these systems across their various operating companies, divisions and business units, sometimes across multiple geographies and time zones with language variations. It all unites to create a melting pot of potential error, with far-reaching consequences, unless MDM is correctly implemented based on good quality data.

None of these concerns yet raises the issue of big data or the cloud. Without first ensuring MDM is properly and accurately implemented around the core systems, companies don’t have a snowball’s hope in Hell of succeeding at any big data or cloud data initiatives. Big data adds successive layers of complexity depending on the scope of the data and the variety of sources. Shuffling data into the cloud, too, introduces a complexity that the vast majority of businesses, especially those outside the top 500, simply cannot cope with. With big data alone, companies can expect their stored data to grow by an average of 60% annually, according to IDC. That can be a frightening prospect for CIOs and their IT teams when they are still struggling to grapple with the data feeding core systems.

While MDM is no longer a buzzword and data quality is an issue as old as data itself, they are certainly crucial elements that South African companies are addressing today.

Big data propels exploratory data analysis to the fore, Part II

Analysis constitutes inspecting, cleaning, transforming and modelling data in a bid to derive useful business conclusions. Exploration of the data is a prerequisite, then, because it seeks to discover patterns and links between data sets, often through visual methods. John Tukey’s proposition encourages statisticians to explore data with a view to formulating hypotheses that may lead to new data collection and experiments. It is also nothing new: Tukey first proposed exploratory data analysis in a book of the same name in 1977.
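
A tiny exploratory pass in Tukey’s spirit, sketched with pandas and matplotlib on invented sales figures, shows the sequence: summarise, look at the data visually, then form a hypothesis worth testing:

    import pandas as pd
    import matplotlib.pyplot as plt

    sales = pd.DataFrame({
        "region": ["North", "North", "South", "South", "South"],
        "units":  [120, 135, 80, 310, 95],
    })

    print(sales.describe())                # inspect the distribution for surprises
    print(sales.groupby("region").mean())  # look for links between data sets

    # The visual step Tukey urged: the South outlier (310) stands out,
    # suggesting a hypothesis to test with new data collection.
    sales.boxplot(column="units", by="region")
    plt.show()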


What is new is that data technicians are grappling with the issue in a modern computing sphere that incorporates big data, which is defined not only by size but also by the numerous types of data involved. Therein lies the clue to the requisite capabilities of modern exploratory data tools. It also hints at one of the benefits of data exploration: new data collection and experiments with observable business benefits.

To be effective, then, the tools need to be able to search large volumes of data as well as diverse data types. They must also be easy to use, since in many cases it is businesspeople who must use them; yet they must also offer technicians the ability to model and query accordingly. They need to rapidly present useful information to people, yet simultaneously offer the ability to drill deeper in search of specific information as required. Above all, they must be able to integrate with numerous data stores and repositories, because higher, ubiquitous bandwidth necessitates interaction on an unprecedented scale.

Big data propels exploratory data analysis to the fore

By Mervyn Mooi, director at Knowledge Integration Dynamics (KID)

Could the worlds of online search and corporate data exploration be converging, marrying the simplicity of Internet search with enterprise analytics? It makes sense that businesspeople should be able to sift more easily through immense enterprise data stores – both internal and external – to find the information they need to help them do business better.

While full automation of the task is some way off, it is an end goal in the drive to discover new facts about business entities.


Businesses and the people who run them need to know specific information about customers, products, services, operational processes such as financials, human resources (HR), the supply chain, fleets and so on. The need to know about these things is driven by the desire to figure out how to reduce costs, increase profitability, reduce churn, increase customer acquisition, improve satisfaction, improve compliance, reduce risk – in fact, understand and facilitate every facet of a business because it has a direct or indirect impact on the bottom line.

In their efforts to achieve this, organisations today own or have access to unprecedented volumes of data and also, crucially, data of many different types.

Finding the meaning in data is how you extract value from possessing or accessing it in the first place. But, as in mining for minerals, how do you extract the nuggets from the tonnes of earth that must be shifted in order to be strained and processed? And how do you cope with the plethora of minerals that must be separated from the raw earth when each requires its own process? And how do you determine what new minerals may exist in the soil if you don’t know what you’re looking for in the first place?

Data exploration facilitates the technologies that follow later in the data, information and knowledge process. BI solutions aid exploration because they allow people to explore the data, looking for commonalities, seeking out unknown trends, examining events and presenting those in formats that businesspeople can understand. The tools can only take people so far, yet it is people who remain at the tip of the spear.

They must make informed decisions that will lead businesses through their daily operations, to their tactical targets and ultimately towards their strategic goals. Data exploration is a crucial step in that process because it precedes data analysis and enables far greater value from the original data.