Insight From Data Everywhere Driving Hybrid Cloud Strategy
Leading organizations are shifting their focus to enabling cloud capabilities across their on-premises application landscape. The benefits this brings to application developers and owners alike are elevating the conversation from one of cost management to one driven by value. It is behind the many transformation initiatives we see sweeping all sectors of business, government, and academia, as the agility and efficiency of cloud open new possibilities. Until now, the focus has been on developers and applications, as that is where the perceived immediate benefits lie. These “edge-to-cloud” capabilities, as HPE refers to them in its edge-to-cloud framework, enable interactions with customers, partners, and suppliers. However, another element heavily influenced by this transition has largely completed its transformation in stealth: the move to cloud-native data services.
Moving beyond infrastructure
When organizations first started looking to the cloud to solve infrastructure problems, IT architects sought to mirror their existing environments. They built block storage in the cloud for their database LUNs, file systems to mirror unstructured home shares, and data lakes using the same technology used on-premises, so that existing models and methods would work just as well in the cloud. Unfortunately, this missed the point of the transformation and reduced it to a simple transition. As developers gained access to the APIs of the new services and moved the needle from merely providing storage in the cloud to delivering data services in the cloud, the transformation took on new meaning.
Cloud native benefits
Cloud-native environments do not treat data and applications equally. The benefits a cloud-native environment brings to data are not the same as those applications receive; they are better. They have a compound effect, amplifying the effectiveness and impact of the benefits applications receive as well.
Cloud-native data is not limited to the provisioning of data storage; more broadly, it involves applying policy, governance, analysis, and processing to data depending on its location and context. Cloud-native data is context-aware data. It is generated by our interactions with customers, with our partners, and with our environments. This can happen in a data center or a cloud, where the customer’s interaction with the application occurs, or it can happen at the edge, where computing power is moved to where the interaction naturally occurs. The benefit of the latter is immediacy: a feeling of responsiveness and of closeness to the decision.
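To make “policy applied to data depending on its location and context” concrete, here is a minimal sketch in Python. The class and policy names are illustrative assumptions, not an HPE API: the idea is simply that a data asset carries its context (where it was generated, how sensitive it is) and a policy function decides how it may be handled based on that context.

```python
from dataclasses import dataclass

# Illustrative sketch only: context-aware data policy.
# All names here are hypothetical, not a real product API.

@dataclass
class DataAsset:
    name: str
    location: str      # "edge", "datacenter", or "cloud"
    sensitivity: str   # "public" or "restricted"

def placement_policy(asset: DataAsset) -> str:
    """Decide where the asset may be processed, given its context."""
    if asset.sensitivity == "restricted":
        # Restricted data stays where it was generated.
        return asset.location
    # Public data can move to wherever compute is available.
    return "cloud"

sensor = DataAsset("turbine-telemetry", location="edge", sensitivity="restricted")
print(placement_policy(sensor))  # restricted data is processed at the edge
```

The point of the sketch is that the decision follows the data’s context rather than a fixed infrastructure layout, which is what distinguishes cloud-native data from simple storage provisioning.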
Data in an edge-to-cloud world
Data is central to an edge-to-cloud operating model. It is gathered at the edge, where transactions occur to provide insights into those activities, helping organizations to determine how to adjust operations according to the intelligence derived from it. This diagram shows the relationship between edge and cloud, with data being the information currency moving between organizational operations and the transactional experience at the edge.
So, how does data become cloud-native? The answer lies in how we perceive data. Traditionally, it has been viewed as having a single purpose: we collect data for a very specific reason and create a silo around the components needed to fulfill that purpose. This is efficient and reasonably risk-averse, until it becomes clear that the data contains far more value than the use it is being put to. Before cloud, the data would simply be replicated and a new silo created for the new use case. That, however, created problems: keeping the copies synchronized, rising infrastructure costs, and difficulties in governance and management.
The next step in the evolution was the data lake: a giant pool of data that any application could take advantage of. Once more, however, this required moving the data from the native environment where it was created into this vast pool. Governance, security, and policy all became much harder in this model, and organizations feared that the value they were investing to unlock was being used by others who had made no such investment. This fear of losing control, or of granting access to those who had not sufficiently contributed, led us to data lakehouses: essentially small islands of data distributed within the organization and beyond it. Organizations could then take advantage of this loosely coupled data architecture to allow elements to exist at the edge, within the data center, and even within other organizations. It was nearly cloud-native.
The next step is crucial: understanding, in this new distributed data world, where your data is, valuing it, finding new data, and then carrying out value exchanges and collaborative efforts around that data. Thus, dataspaces were born. Dataspaces are truly cloud-native data: data can be discovered, valued, and exchanged, creating a marketplace of data and data artifacts that drives previously unattainable insights.
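The discover/value/exchange cycle described above can be sketched with a toy in-memory catalog. This is an assumption-laden illustration, not a real dataspace standard or product: the essential property it shows is that datasets are published and discovered through metadata, and a value exchange is recorded while the data itself stays with its owner.

```python
# Illustrative sketch only: a minimal in-memory "dataspace" catalog.
# Class and method names are hypothetical, chosen for clarity.

class Dataspace:
    def __init__(self):
        self._catalog = {}

    def publish(self, name, owner, tags, value):
        """Register a dataset so other participants can discover it."""
        self._catalog[name] = {"owner": owner, "tags": set(tags), "value": value}

    def discover(self, tag):
        """Find datasets by tag, without moving the data itself."""
        return [n for n, meta in self._catalog.items() if tag in meta["tags"]]

    def exchange(self, name, buyer):
        """Record a value exchange; the data stays with its owner."""
        meta = self._catalog[name]
        return {"dataset": name, "seller": meta["owner"],
                "buyer": buyer, "price": meta["value"]}

ds = Dataspace()
ds.publish("retail-footfall", owner="store-ops", tags=["retail", "edge"], value=100)
print(ds.discover("retail"))   # ['retail-footfall']
print(ds.exchange("retail-footfall", "marketing"))
```

A real dataspace adds contracts, identity, and governance on top, but the marketplace dynamic is the same: metadata travels, data stays put.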
The journey to cloud native
As organizations embark on their journeys to realize the benefits of cloud-native data, they will need to focus on the data capabilities that provide the most value. Addressed in a structured manner, these capabilities enable a much more streamlined path to value. HPE has worked with a multitude of customers on their journeys, and we use a proprietary framework that helps our customers get to cloud-native data nirvana faster. Some of the key components to address are:
- Data Strategy and Governance defines the organization, strategy, and stewardship to maximize data value and ensure compliance.
- Data Architecture provides the data principles, physical characteristics, and formats to enable cloud-native practices.
- Data Lifecycle Management safeguards the retention and resiliency required to manage this most critical asset.
- Data Ingestion & Processing focuses on the incorporation and quality of the data creation and migration capabilities.
- Data Knowledge is required to provide and communicate the holistic data opportunities.
- Data Consumption must provide ease of intelligence, insights, and predictions.
As clients look to jumpstart their cloud-native data capabilities across their ecosystems, HPE has enabled them to holistically mature their operating models and deliver insights at scale and pace. As shared here, one of the greatest benefits of operating in a cloud-like manner lies in the data domain. Operating in a cloud-native manner exposes data through APIs, meaning the data itself becomes an entity the programmer can easily find, manipulate, and create value from, much as they did with early cloud services. This openness, though wrapped in governance for protection, allows us to understand data provenance, leading to transparency for AI models, reports, decision making, and more. Trusted AI starts with trusted data, and that means lineage and process history. The HPE Edge-to-Cloud Adoption Framework guides customers in establishing proper governance, strategy, and lifecycle management while maturing their cloud-native architecture, ingestion, and consumption capabilities.
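The claim that trusted AI needs lineage and process history can be illustrated with a small sketch: every transformation of a dataset appends to a provenance chain, so any derived artifact can be traced back to its sources. The names here are hypothetical and for illustration only, not the framework’s implementation.

```python
# Illustrative sketch only: data provenance via an append-only lineage log.

class TrackedDataset:
    def __init__(self, name, records, lineage=None):
        self.name = name
        self.records = records
        # Lineage starts with this dataset's own name.
        self.lineage = lineage or [name]

    def transform(self, new_name, fn):
        """Apply a transformation and extend the provenance chain."""
        return TrackedDataset(new_name, [fn(r) for r in self.records],
                              self.lineage + [new_name])

raw = TrackedDataset("pos-transactions", [12.0, 8.5, 30.0])
cleaned = raw.transform("cleaned", lambda x: round(x))
features = cleaned.transform("model-features", lambda x: x / 30)

# The full process history is available for audit:
print(features.lineage)  # ['pos-transactions', 'cleaned', 'model-features']
```

Because each step records its ancestry rather than overwriting it, a report or model built on `features` can always answer the audit question: which raw data, through which steps, produced this result?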
For further information please visit www.hpe.com/greenlake/cloud-adoption-framework.
This article is one in a series addressing the eight capability domains of the HPE Edge-to-Cloud Adoption Framework. The other seven articles can be found here:
The Crucial Role of Application Management in a Cloud Operating Model
Does Your Company Have a Complete Innovation Framework?
Five Focus Areas to Transform Your IT Organization
DevOps and Digital Transformation: Now and Future
An Operating Model to Support Engagement at the Digital Edge
The Role of Security Transformation
3 Essential Elements of Strategy & Governance to Accelerate a Multi-Cloud Journey
About Glyn Bowden
Glyn Bowden is a CTO for HPE Pointnext Services, AI & Data Science Practice. His technical experience has spanned many industries from global finance, national security and high technology. With a background in high performance compute, cloud native computing and emerging technologies such as blockchain and machine learning, Glyn’s goal is to make high technology solutions accessible to all.
Copyright © 2021 IDG Communications, Inc.