Answers in no time

Internet search engines with instant query responses may have misled enterprises into believing all analytical queries should deliver split-second answers.

With the advent of big data analytics hype and the rapid convenience of Internet searches, enterprises might be forgiven for expecting to have all answers to all questions at their fingertips in near real-time.


Unfortunately, getting trusted answers to complex questions is a lot more complicated and time-consuming than simply typing a search query. Behind the scenes on any Internet search, a great deal of preparation has already been done in order to serve up the appropriate answers.

Google, for instance, dedicates vast amounts of high-end resources, around the clock, to preparing the data needed to answer a search query instantly. But even Google cannot answer broad questions or make forward-looking predictions.

In cases where the data is known and trusted, where it has been prepared and rules have been applied, and where the search parameters are limited – such as on a property Web site – almost instant answers are possible. But this is not true business intelligence (BI) or analytics.

Behind the scenes

Within the enterprise, matters become a lot more complicated. When the end-user seeks an answer to a broad query – such as when a marketing firm wants to assess social media to find an affinity for a certain range of products over a six-month period – a great deal of ‘churn’ must take place in the background to deliver answers. This is not a split-second process, and it may deliver only general trend insights rather than trusted, quality data that can serve as the basis for strategic decisions.

Most business users are not BI experts.

 

When end-users wish to run a query and are given the power to process their own BI/analytics, lengthy churn must take place. Every time a query, report or instance of data access is converted into useful BI/analytical information for end-consumers, there is a whole lot of preparation work to be done along the way: identify data sources, access, verify, filter, pre-process, standardise, look up, match, merge, de-duplicate, integrate, apply rules, transform, format, present and distribute/channel.
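To make the churn concrete, the sketch below strings those stages together in simplified form. It is an illustrative Python sketch only – the function names, fields and sample records are hypothetical, not any particular vendor's toolset – but it shows why each query that starts from raw data repeats so much preparation work.

```python
# Minimal sketch of a BI data-preparation chain (illustrative only).
# Stage functions, field names and sample records are hypothetical.

def standardise(record):
    """Normalise casing and strip whitespace on text fields."""
    return {k: v.strip().lower() if isinstance(v, str) else v
            for k, v in record.items()}

def is_valid(record):
    """Verify/filter: drop records missing the mandatory business key."""
    return record.get("customer_id") is not None

def deduplicate(records):
    """De-dup on the business key, keeping the first occurrence."""
    seen, unique = set(), []
    for r in records:
        if r["customer_id"] not in seen:
            seen.add(r["customer_id"])
            unique.append(r)
    return unique

def apply_rules(record):
    """Apply a simple business rule: derive a spend band."""
    record["spend_band"] = "high" if record.get("spend", 0) > 10_000 else "low"
    return record

def prepare(raw_records):
    """Access -> verify -> standardise -> de-dup -> apply rules -> present."""
    cleaned = [standardise(r) for r in raw_records if is_valid(r)]
    return [apply_rules(r) for r in deduplicate(cleaned)]

if __name__ == "__main__":
    source = [  # stand-in for data pulled from the identified sources
        {"customer_id": "C1", "name": " Alice ", "spend": 12000},
        {"customer_id": "C1", "name": "ALICE", "spend": 12000},   # duplicate
        {"customer_id": None, "name": "Unknown", "spend": 50},    # fails verification
    ]
    print(prepare(source))
```

In a real environment every one of these stages is far heavier – traversing millions of rows and applying hundreds of rules – which is exactly where the background churn comes from.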

Because most queries have to traverse, link and process millions of rows of data and possibly trillions of words from within the data sources, this background churn could take hours, days or even longer.

A recent TDWI study found organisations are dissatisfied with the time it takes for the chain of processes involved in BI, analytics and data warehousing to deliver valuable data and insights to business users. The organisations attributed this, in part, to ill-defined project objectives and scope, a lack of skilled personnel, data quality problems, slow development and an inability to access all relevant data.

The problem is that most business users are not BI experts and do not all have analytical minds, so the ‘discover and report’ method may be iterative (and therefore slow), and in many cases the outputs/results are not of the quality expected. The results may also be inaccurate, as data quality rules may not have been applied and data linking may not be correct, as it would be in a typical data warehouse where data has been qualified and pre-defined/derived.

In a traditional scenario – a structured data warehouse where all the preparation is done in one place, once only, and then shared many times, supported by quality data and predefined rules – it may be possible to get sub-second answers.

But, often, even in this scenario, sub-second insights are not achieved, since time to insight also depends on properly designed data warehouses, server power and network bandwidth.

Users tend to confuse search and discovery over flat, raw data that is already there with the next level: generating information and insight. In more complex BI/analytics, each time a query is run, all the preparation work has to be done from the beginning, and the necessary churn can take a significant amount of time.

Therefore, demanding faster BI ‘time to value’ and expecting answers in sub-seconds could prove to be a costly mistake. While it is possible to gain some form of output in sub-seconds, these outputs will likely not be qualified, trusted insights that can deliver real strategic value to the enterprise.


Old business issues drive a spate of data modernisation programmes

 

By Mervyn Mooi, director at Knowledge Integration Dynamics (KID)

The continued evolution of all things is obviously also felt in the data warehousing and business intelligence fields, and it is apparent that many organisations are currently on a modernisation track.

But why now? Behind it all is the exponential growth and accumulation of data, and businesses are actively seeking to derive value, in the form of information and insights, from that data. They need this for marketing, sales and performance measurement purposes, and to help them face other business challenges. All the business key performance indicators or buzzwords are there: wallet share, market growth, churn, return on investment (ROI), margin, survival, customer segments, competition, productivity, speed, agility, efficiency and more. These are business factors and issues that require stringent management for organisational success.

Take a look at Amazon’s recommended lists and you’ll see how evident and crucial these indicators are. Or peek into a local bank’s, retailer’s or other financial institution’s rewards programmes.


Social media has captured the limelight in terms of new data being gathered, explored and exploited. But it is not the only source. Mobility, cloud and other forms of big data, such as embedded devices and the Internet of things, collectively offer a smorgasbord of potential that many companies are mining for gold, and which others must entrench in their IT and marketing operations if they are to remain in the game. Monetisation of the right data, information and functionality, at the right time, is paramount.

The tech vendors have been hard at work to crack the market and give their customers what they need to get the job done. One of the first things they did was come up with new concepts for working with data using the old technologies. They introduced tactical strategies like centres of excellence, enterprise resource planning, application and information integration, sand-pitting and more. They also realised the need to bring the techies out of the IT cold room and put them in front of business-people, so that the business could get the reports it needed to be competitive, agile, efficient and all the other buzzwords. That had limited success.

In the meantime, the vendors were also developing modern, state-of-the-art technologies that people can actually use. The old process of having techies write reports that would be fed to business-people on a monthly basis was not efficient, not agile, not competitive and generally not at all what they needed. What they needed were tools that could hook into any source or system, that could be accessed and massaged by the business-people themselves, and that could be relied upon for off-the-shelf integration and reporting. Besides that, big data was proving to be complex and required a new, usable strategy that would be scalable and affordable to both the organisation and the man on the street.

Hadoop promised to help that along. Hadoop is a framework based on open source technology that can deliver additional benefits, such as better return on investment, by using clusters of low-cost servers. And it can chew through petabytes of information quickly. The key is integrating Hadoop into mainstream analytics applications.
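To illustrate how that cluster-of-cheap-servers model divides up work, the sketch below shows a classic Hadoop Streaming job in Python: Hadoop runs the mapper over chunks of the input in parallel across the nodes, shuffles and sorts the intermediate keys, then runs the reducer. The word-count task and the map/reduce command-line switch are illustrative assumptions, not tied to any particular deployment.

```python
#!/usr/bin/env python3
# Hadoop Streaming sketch: the same script acts as mapper or reducer,
# reading from stdin and writing tab-separated key/value pairs to stdout.
import sys

def mapper():
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")          # emit (word, 1) for every token

def reducer():
    # Hadoop delivers mapper output sorted by key, so equal words are adjacent.
    current, count = None, 0
    for line in sys.stdin:
        word, value = line.rstrip("\n").split("\t")
        if word == current:
            count += int(value)
        else:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, int(value)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```

Submitted through the standard hadoop-streaming jar, the framework handles splitting the input, scheduling the script across the cluster and collecting the results; the analytics application only ever sees the aggregated output.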

Columnar databases make clever use of the properties of the underlying storage technologies, enabling compression economies and making searches through the data quicker and more efficient. There is a lot of techie mumbo jumbo that makes it work, but suffice it to say that searching information puts the highest overhead on systems and networks, so it is a natural area to address first.
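A rough sketch of the idea, in plain Python (the table and column names are invented for illustration): storing values column by column means an aggregate touches only the column it needs, and long runs of repeated values compress very well.

```python
# Row store vs column store, illustrated with a tiny invented table.
rows = [
    {"region": "ZA", "product": "A", "sales": 100},
    {"region": "ZA", "product": "B", "sales": 150},
    {"region": "UK", "product": "A", "sales": 90},
]

# Column-oriented layout: each column is stored (and compressed) separately.
columns = {key: [r[key] for r in rows] for key in rows[0]}

def run_length_encode(values):
    """Compress runs of identical values into [value, count] pairs."""
    encoded = []
    for v in values:
        if encoded and encoded[-1][0] == v:
            encoded[-1][1] += 1
        else:
            encoded.append([v, 1])
    return encoded

print(run_length_encode(columns["region"]))   # [['ZA', 2], ['UK', 1]]
print(sum(columns["sales"]))                  # the aggregate reads one column only
```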

NoSQL is also known as ‘not only SQL’ because it provides storage and retrieval modelled not only on tables, common to relational databases, but also on columns, documents, key-value pairs, graphs, lists and more. Its designs are simpler, horizontal scaling is better – which improves the ability to add low-cost systems to boost performance – and it offers better control over availability.
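The sketch below illustrates two of those ideas in miniature: schemaless documents addressed by key rather than rows in fixed tables, and horizontal scaling by hash-partitioning keys across cheap nodes. The in-memory dictionary merely stands in for a real store, and all names are illustrative.

```python
# Document/key-value idea behind many NoSQL stores, in miniature.
store = {}  # key -> document

def put(key, document):
    store[key] = document

def get(key):
    return store.get(key)

def node_for(key, nodes=3):
    """Horizontal scaling sketch: hash-partition keys across low-cost nodes."""
    return hash(key) % nodes

put("customer:42", {"name": "Alice", "channels": ["web", "mobile"]})
put("customer:43", {"name": "Bob"})                 # a different shape is fine
print(get("customer:42")["channels"])               # ['web', 'mobile']
print(node_for("customer:42"), node_for("customer:43"))
```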

Data appliances are just as the name suggests: plug-and-play data warehousing in a box – systems, software and the whole caboodle. Just pop it in and, presto, you have a ton more capacity and capability. These technologies employ larger, massively parallel and faster in-memory processing techniques.

Those technologies, and there are others like them, solve the original business issues mentioned upfront. They deliver the speed of analytics that companies need today; they give companies the means to gather, store and view data differently, which can lead to new insights; they can grow or scale as the company’s data demands change; their techies and business-people alike are more productive using the new tools; and they bring a whole raft of potential ROI benefits. ROI, let’s face it, is becoming a bigger issue in environments where demands are always growing, never diminishing, and where financial directors are increasingly furrow-browed with an accumulation of nervous tics.

Large businesses aren’t about to rip out their existing investments – there’s the implicit ROI again – but will rather evolve what they have. The way organisations are working to change reporting and analytics, though, will have an impact on the skills that they require to sustain their environments. Technical and business tasks are being merged and that’s why there’s growing demand for so-called data scientists.

Data scientists are supposed to be the do-it-all guys, covering everything from data sourcing and discovery through to delivering insightful and sentiment-based intelligence. They are unlike traditional information analysts and data stewards or report writers, who had distinct roles and responsibilities in the data and information domains.


The wielder, not the axe, propels plunder aplenty

By Mervyn Mooi, director at Knowledge Integration Dynamics (KID)

Business intelligence is a fairly hot topic today – good news for me and my ilk – but that doesn’t mean everything about it is new and exciting. The rise and rise of BI has seen a maturation of the technologies, derived from a sweeping round of acquisitions and consolidations in the industry just a few years ago, that has created something of a standardisation of tools.


We have dashboards and scorecards, data warehouses and all the old Scandinavian-sounding LAPs: ROLAP, MOLAP, OLAP and possibly a Ragnar Lothbrok or two. And, like the Vikings knew, without some means to differentiate, everyone in the industry becomes a me-too, which means that’s what their customers ultimately get. And that makes it very hard to win battles.

 

Building new frameworks around tools to achieve some sense of differentiation achieves just that: only a sense of differentiation. In fact, even when it comes to measurements, most measures, indicators and references in BI today are calculated in a common manner across businesses. They typically use financial measures, such as monthly revenues, costs, interest and so on. The real difference, however, comes in preparing the data and the rules that are applied to the function.

 

A basic example that illustrates the point: let’s say the Vikings want to invade England and make off with some loot. Before they can embark on their journey of conquest they need to ascertain a few facts. Do they have enough men to defeat the forces in England? Do they have enough ships to get them there? Do they know how to navigate the ocean? Are their ships capable of safely crossing? Can they carry enough stores to see them through the campaign or will they need to raid settlements for food when they arrive? Would those settlements be available to them? How much booty are they likely to capture? Can they carry it all home? Will it be enough to warrant the cost of the expedition?

 

The simple answer was that the first time they set sail they had absolutely no idea, because they had no data. It was a massive risk of the kind most organisations aim to avoid these days. So before they could even begin to analyse the pros and cons, they had to get at the raw data itself. And that’s the same issue that most organisations have today. They need the raw data, but they don’t need it, in the Viking context, from travellers and mystics, spirits and whispers carried on the wind. It must be good quality data derived from reliable sources and a good geographic cross-section. And it is in preparing their facts – checking that they are correct, that they come from reliable sources and that there has been no case of broken telephone – that businesses will truly make a difference. Information is king in war because it allows a much smaller force to figure out where to maximise its impact on a potentially much larger enemy. The same is true in business today.

 

Before the Vikings could begin to loot and pillage they had to know where they could put ashore quickly to effect a surprise raid with overwhelming odds in their favour. In business you could say that you need to know the basic facts before you drill down for the nuggets that await.

 

The first Viking raids grew to become larger as the information the Vikings had about England grew. Pretty soon they had banded their tribes or groups together, shared their knowledge and were working toward a common goal: getting rich by looting England. In business, too, divisions, units or operating companies may individually gain knowledge that it makes sense to share with the rest to work toward the most sought-after plunder: the overall business strategy.

 

Because the tools and technologies supply common functionality and businesses or implementers can put them together in fairly standard approaches as they choose, the real differentiator for BI is the data itself and how the data is prepared – what rules are applied to it before it enters the BI systems. Preparation is king.

 

These rules are what ultimately differentiate information based on wind-carried whispers from reliable reports brought back by spies abroad. Which would you prefer with your feet on the deck?

 

Contact

 

Knowledge Integration Dynamics, Mervyn Mooi, (011) 462-1277, mervyn.mooi@kid.co.za

Thought Bubble, Jeanné Swart, 082-539-6835, jeanne@thoughtbubble.co.za

 

 

Data (Information) Governance: a safeguard in the digital economy

Global interest in Data Governance is growing, as organisations around the world embark on Digital Transformation and Big Data management to become more efficient and competitive. But while data is being used in myriad new ways, the rules for effective governance must prevail.

By Mervyn Mooi, director at Knowledge Integration Dynamics (KID)


The sheer volume and variety of data coming into play in the increasingly digital enterprise presents massive opportunities for organisations to analyse this data/information and apply the insights derived from it to achieve business growth and realise efficiencies. Digital transformation has made data management central to business operations and created a plethora of new data sources and challenges. New technology is enabling data management and analysis to be more widely applied, supporting organisations that increasingly view data as a strategic business asset that can be used to gain a competitive advantage.

To stay ahead, organisations have to be agile and quick in this regard, which has prompted some industry experts to take the view that data governance needs a new approach: data discovery carried out first, before data governance rules are decided on and applied in an agile, scalable and iterative way.

While approaching data management, analysis and the associated data governance in an iterative way, using smaller packets of data, makes sense, the rules applied must still comply with legislation and best practice; as a prerequisite, these rules should therefore be formalised before any data project or data discovery is undertaken. Governance rules must be consistent and must support the overall governance framework of the organisation throughout the lifecycle of each data asset, regardless of where and when the data is generated, processed, consumed and retired.

In an increasingly connected world, data is shared and analysed across multiple platforms all the time – by both organisations and individuals. Most of that data is being governed in some way, and where it is not, there is risk. Governed data is secure, applied correctly and of reliable quality, and – crucially – it helps mitigate both legal and operational risk. Poor quality data alone is a significant cause for concern among global CEOs: a recent Forbes Insights and KPMG study found that 45% of CEOs say their customer insight is hindered by a lack of quality data and 56% have concerns about the quality of the data they base their strategic decisions on, while Gartner reports that the average financial impact of poor quality data could amount to around $9.7 million annually. On top of this, the potential costs of unsecured data or non-compliance could be significant. Fines, lawsuits, reputational damage and the loss of potential business from highly regulated business partners and customers are among the risks faced by organisations that fail to implement effective data governance frameworks, policies and processes.

Ungoverned data results in poor business decisions and exposes the organisation and its customers to risk. Internationally, data governance is taking top priority as organisations prepare for new legislation such as the EU’s General Data Protection Regulation (GDPR), set to come into effect next year, and as bodies such as Data Governance Australia launch a new draft Code of Practice setting benchmarks for the responsible collection, use, management and disclosure of data. South Africa, perhaps surprisingly, is at the forefront here with its POPI regulations and wide implementation of other guidelines such as King III and Basel. New Chief Data Officer (CDO) roles are being introduced around the world.

Now, more than ever before, every organisation has to have up-to-date data governance frameworks in place and, more importantly, have the rules articulated or mapped into its processes and data assets. It must look from the bottom up to ensure that the rules on the floor align with the compliance rules and regulations from the top. These rules and conditions must be formally mapped to the actual physical rules and technical conditions in place throughout the organisation. By doing this, the organisation can demonstrate that its data governance framework is real and articulated into its operations, across the physical business and technical processes, methodologies, access controls and data domains of the organisation, ICT included. This mapping process should ideally begin with a data governance maturity assessment upfront. Alongside this, the organisation should deploy dedicated data governance resources for sustained stewardship.
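One practical way to think about that mapping is as an explicit link between each policy statement at the top and a testable technical condition on the floor. The fragment below is only a conceptual sketch in Python – the rule names, fields and checks are hypothetical examples, not a formal standard or any product’s feature.

```python
# Sketch: mapping top-down governance rules to bottom-up technical checks.
import re

def check_retention(record):
    """Technical condition for a retention policy (e.g. a maximum of 7 years)."""
    return record.get("retention_days", 0) <= 7 * 365

def check_masked_id(record):
    """Technical condition for a masking policy: 9 masked characters + last 4 digits."""
    return bool(re.fullmatch(r"\*{9}\d{4}", record.get("national_id", "")))

GOVERNANCE_MAP = {
    # policy statement (framework level)  -> physical rule on the data
    "Personal data retained max 7 years":   check_retention,
    "National IDs stored masked":           check_masked_id,
}

def audit(record):
    """Report which policies a given data record satisfies."""
    return {policy: check(record) for policy, check in GOVERNANCE_MAP.items()}

print(audit({"retention_days": 400, "national_id": "*********1234"}))
```

Even a toy mapping like this makes the point: once the link is explicit, an auditor can see exactly where each compliance rule is enforced on the data itself.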

Mapping the rules and conditions, and configuring the relevant toolsets to enforce data governance, can be a complex and lengthy process. But it is necessary in order to entrench data governance throughout the organisation. Formalised data governance mapping proves to the world where and how the organisation has implemented data governance, demonstrating that policies are entrenched throughout its processes, and so supporting audit and reducing compliance and operational risk.

To support agility and speed of delivery iterations for data management and analyses initiatives and instances, data governance can be “sliced” specifically for the work at hand and also applied in iterative fashion, organically covering all data assets over time.

 

 

Data in transit raises security risks

Keeping data secure can be as daunting as herding cats, unless data governance is approached strategically.

There is no doubt data proliferation is presenting a challenge to organisations. IDC predicts the data created and shared every year will reach 180 zettabytes in 2025; and we can expect much of that data to be in transit a lot of the time.


This means it will not be securely locked down in data centres, but travelling across layers throughout enterprises, across the globe and in and out of the cloud. This proliferation of data across multiple layers is raising concern among CIOs and businesses worldwide, particularly in light of new legislation coming into play, such as the General Data Protection Regulations for Europe, due to be implemented next year.

Where traditionally, data resided safely within enterprises, it is now in motion almost constantly. Even data on-premises is transported between business units and among branches within the enterprise, presenting the risk of security chaos. The bulk of the core data is always in movement – these are enabling pieces of data moving within the local domain. At every stage and every endpoint, there is a risk of accidental or deliberate leaks.

When data is copied and transmitted via e-mail, porting or some other mechanism from one location to another, it is not always encrypted or digitally signed. To enable this, companies need to classify their data assets against the security measures each would require, and this is not evident in most companies today.
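As a minimal illustration of what ‘encrypted and tamper-evident in transit’ can look like, the sketch below uses the widely used third-party `cryptography` package for Python. The payload, and the idea of a key-management service, are assumptions made for the example; classification-driven key handling is out of scope here.

```python
# Sketch: protecting an export before it leaves the premises.
# Requires the third-party package: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, issued and held by a key-management service
cipher = Fernet(key)

payload = b"customer_id,balance\nC1,10500\n"   # illustrative export file
token = cipher.encrypt(payload)  # authenticated encryption: confidentiality + tamper detection

# ... the token travels by e-mail, file transfer or removable media ...

assert cipher.decrypt(token) == payload   # only holders of the key can read or verify it
```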

At the next layer, commonly described as the ‘fog’ just below the cloud, data and information travelling between applications and devices off-premises are also at risk. A great deal of data is shared in peer-to-peer networks, connected appliances or by connected cars. If this data is not secured, it too could end up in the wrong hands.

In an ever-growing data ecosystem, enterprise systems should be architected from the ground up with compliance in mind.

Most companies have data security policies and measures in place, but these usually only apply on-premises. Many lack effective measures when the data physically leaves the premises on employee laptops, mobile devices and memory sticks. These devices are then used in unsecured WiFi areas, or they are stolen or lost, putting company IP at risk. Data on mobile devices must be protected using locks, passwords, tracking micro-dots, and encryption and decryption tools.

Finally, at the cloud layer, data stored, managed and processed in the cloud is at risk unless caution is exercised in selecting cloud service providers and network security protocols, and applying effective cloud data governance.

While large enterprises are becoming well versed in ensuring data governance and compliance in the cloud, small and mid-sized enterprises (SMEs) are becoming increasingly vulnerable to risk in the IOT/cloud era.

For many SMEs, the cloud is the only option in the face of capex constraints, and due diligence might be overlooked in the quest for convenience. Many SMEs would, for example, sign up for a free online accounting package without considering who will have access to their client information, and how secure that data is.

Locking down data that now exists across multiple layers, spans vast geographical areas and is constantly in transit demands several measures. Data must be protected at source, or ‘at the bone’. In this way, even if all tiers of security are breached, the ultimate protection sits on the data elements themselves, at cell level, throughout their lifecycles. Effective encryption, identity management and point-in-time controls are also important for ensuring data is accessible only when and where it should be available, and only to those authorised to access it.

Role and policy-based access controls must be implemented throughout the data lifecycle, and organisations must have the ability to implement these down to field and data element level.
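By way of illustration, field-level enforcement can be as simple in concept as the sketch below: every field carries a policy stating which roles may see it, and everything else is masked before it reaches the consumer. The roles, fields and masking rule are hypothetical examples only.

```python
# Sketch: role- and policy-based access control applied down to field level.
FIELD_POLICY = {
    "name":        {"analyst", "steward", "admin"},
    "email":       {"steward", "admin"},
    "national_id": {"admin"},               # most sensitive field: admin only
}

def view(record, role):
    """Return only the fields this role may see; mask everything else."""
    return {field: (value if role in FIELD_POLICY.get(field, set()) else "***")
            for field, value in record.items()}

customer = {"name": "Alice", "email": "a@example.com", "national_id": "0000000000000"}
print(view(customer, "analyst"))   # {'name': 'Alice', 'email': '***', 'national_id': '***'}
```

The same principle extends to cell-level protection: the policy travels with the data element, so it holds regardless of which layer – on-premises, fog or cloud – the data happens to be passing through.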

In an ever-growing data ecosystem, enterprise systems should be architected from the ground up with compliance in mind, with data quality and security as the two key pinnacles of compliance. In addition, compliance education and awareness must be an ongoing priority.

All stakeholders, from application developers through to data stewards, analysts and business users, must be continually trained to put effective data governance at the heart of the business, if they are to maintain control over the fast-expanding digital data universe.

It’s not the data management – it’s you

When poor quality data, duplicated effort and siloed information impact operational efficiencies, organisations might feel inclined to point a finger at data management. But it’s not data management that’s broken; it’s enterprise strategy.

By Mervyn Mooi, director at Knowledge Integration Dynamics (KID)

Recently, international thought leaders speculated that data management might be ‘broken’ due to the growth in siloes of data and a lack of data standardisation. They pointed out that data siloes were still in place, much like they were back in the 1980s, and that the dream of standardised, centralised data providing a single view was as elusive as ever.

aaeaaqaaaaaaaaayaaaajgjimdq4ndu0lwi1ytmtndq2yi05yjdmltqyntlmmtkxnzrmmg

(Image not owned by KID)

In South Africa, we also see frustrated enterprise staff increasingly struggling to gain control of growing volumes of siloed, duplicated and non-standardised data. This is despite the fact that most organisations believe they have data management policies and solutions in place.

The truth of the matter is – data management is not what’s at fault. The problem lies in enterprise-wide data management strategies, or the lack thereof.

Data management per se is never really broken. Data management refers to a set of rules, policies, standards and governance for data throughout its lifecycles. While most organisations have these in place, they do not always have uniform data management standards throughout the organisation. Various operating units may have their own legacy models, which they believe best meet their needs. In mergers and acquisitions, new companies may come aboard, each bringing with them their own tried and trusted data management policies. Each operating unit may be under pressure to deliver business results in a hurry, so it continues doing things in whatever way has always worked for it.

The end result is that there is no standardised model for data management across the enterprise. Efforts are duplicated, productivity suffers and opportunities are lost.

In many cases, where questions are raised around the effectiveness of data management, one will find that it is not being applied at all. Unfortunately, many companies are not yet mature in terms of data management and will continue to experience issues, anomalies and politics in the absence of enterprise wide data management. But this will start to change in future.

In businesses across the world, but particularly in Africa, narrower profit margins and weaker currencies are forcing management to look at back end processes for improved efficiencies and cost cutting. Implementing more effective data management strategies is an excellent place for them to start.

Locally, some companies are now striving to develop enterprise-wide strategies to improve data quality and bring about more effective data management. Large enterprises are hiring teams and setting up competency centres to clean the data at enterprise level and move towards effective master data management for a single view of the customer that is used in a common way across the various divisions.
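At the heart of that single view is a matching step that links the same customer across divisional silos. The sketch below shows the idea using the Python standard library’s difflib; the records, fields and threshold are illustrative, and real master data management tools apply far richer match rules and survivorship logic.

```python
# Sketch: fuzzy-matching customer records from two divisions for a single view.
from difflib import SequenceMatcher

def similar(a, b, threshold=0.85):
    """Crude name-similarity test on lower-cased strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

crm     = [{"id": "CRM-1", "name": "Jane M. van der Merwe"}]
billing = [{"id": "BIL-9", "name": "Jane van der Merwe"},
           {"id": "BIL-7", "name": "Peter Ndlovu"}]

matches = [(c["id"], b["id"])
           for c in crm for b in billing
           if similar(c["name"], b["name"])]

print(matches)   # [('CRM-1', 'BIL-9')] -> candidates for a merged golden record
```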

Enterprise-wide data management standards are not difficult to implement technology-wise. The difficult part is addressing the company politics that stands in the way and driving the change management needed to overcome people’s resistance to new ways of doing things. You may even find resistance to improved data management efficiencies simply because manual processes and inefficient data management keep more people in jobs – at least for the time being.

But there is no question that enterprise-wide standards for data management must be introduced to overcome siloes of data, siloes of competency, duplication of effort and sub-par efficiency. Local large enterprises, particularly banks and other financial services enterprises, are starting to follow the lead of international enterprises in moving to address this area of operational inefficiency. Typically, they find that the most effective way to overcome the data silo challenge is to slowly adapt their existing ways of working to align with new standards, in a piecemeal fashion that adheres to the grand vision.

The success of enterprise wide data management strategies also rests a great deal on management: you need a strong mandate from enterprise level executives to secure the buy-in and compliance needed to achieve efficiencies and enable common practices. In the past, the C-suite business executives were not particularly strong in driving data management and standards – they were typically focused on business results, and nobody looked at operating costs as long as the service was delivered. However, now business is focusing more on operating and capital costs and discovering that data management efficiencies will translate into better revenues.

With enterprise-wide standards for data management in place, the later consumption and application of that data becomes highly dependent on users’ requirements, intent and discipline in maintaining the data standards. Data items can be redefined, renamed or segmented in line with divisional needs and processes. But as long as the data is not manipulated out of context or in an unprotected manner, and as long as governance is not overlooked, overall data quality and standards will not suffer.