The data mesh is a current technical and organizational concept for achieving greater business proximity and better scaling in large data & analytics organizations. Implementing it consistently is revolutionary and requires change management.
The Use of Data
The use of data is becoming increasingly important in all sectors, which is why dedicated data organizations have now emerged in many places. What could only be used with the help of specialists 20 years ago is now used by many people in different professional functions. As a result, traditional, central organizations face numerous challenges. In a LinkedIn survey (with no claim to representativeness), I asked my followers what they consider the biggest challenge for large data and analytics organizations. The prevailing opinion: proximity to business, meaning closeness to business-critical issues, is the greatest challenge. This outcome confirms my personal assessment and shows how important it is to apply findings from data and analytics in business-relevant processes.
The Idea of Data Mesh
The data mesh is based on the idea of managing a company's data no longer via a central data department, but via the decentralized teams that produce and use the data. For this to work, four principles have been established:
Domain ownership – decentralized and distributed responsibility: Each decentralized department team has the right and responsibility to define, collect, store, maintain and publish its own data. This means that each team should be able to manage its data products independently and decide how other teams within the enterprise can use them.
Approach to data as a product: Following Marty Cagan, products must be valuable, usable and feasible. This also applies to data products. Furthermore, they should be developed, tested, documented, deployed and maintained like software products (a minimal sketch of such a data product contract follows after this list).
Self-service data infrastructure as a platform: In order to support decentralized teams in their work with data content, a central team provides a platform for handling technical aspects such as data discovery, data integration, API management, data-quality control and monitoring.
Distributed, automated governance: Because responsibility for data products is fundamentally decentralized, responsibility for governance must also be assumed decentrally. Various aspects are automated to avoid relying solely on organizational instructions; security and data protection are the foremost candidates here (see the governance check in the second sketch below).
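What a data product contract and self-service discovery might look like in practice is shown by the following minimal sketch in Python. All names here (DataProduct, DataPlatformCatalog and so on) are illustrative assumptions rather than any standard or specific product; the point is that a data product carries its schema, owner and quality promises as an explicit, machine-readable contract which a central self-service platform can use for data discovery.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: these classes are hypothetical, not a standard API.

@dataclass
class Column:
    name: str
    dtype: str
    description: str = ""       # documented by the owning domain team
    contains_pii: bool = False  # flagged by the owning domain team

@dataclass
class DataProduct:
    name: str                   # e.g. "daily_revenue"
    domain: str                 # owning business domain, e.g. "sales"
    owner: str                  # accountable team, not an individual
    version: str                # data products are versioned like software
    freshness_sla_hours: int = 24  # promised maximum data age
    columns: list[Column] = field(default_factory=list)

class DataPlatformCatalog:
    """Central self-service platform component: stores published
    data products so other domain teams can discover them."""

    def __init__(self) -> None:
        self._products: dict[str, DataProduct] = {}

    def publish(self, product: DataProduct) -> None:
        self._products[f"{product.domain}.{product.name}"] = product

    def discover(self, keyword: str) -> list[DataProduct]:
        return [p for p in self._products.values()
                if keyword in p.name or keyword in p.domain]
```

A sales team could then publish its revenue figures as a versioned data product, and an analyst from marketing could find them via catalog.discover("revenue") without involving a central data team.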
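Automated governance can hook into exactly this publication step. The following sketch (again with hypothetical names, reusing the DataProduct and Column classes from the sketch above) rejects products that violate two simple policies before they become visible to other teams:

```python
class GovernanceViolation(Exception):
    """Raised when a data product fails an automated policy check."""

def enforce_policies(product: DataProduct) -> None:
    """Automated governance checks, run before a data product is
    published - instead of relying on written guidelines alone."""
    # Policy 1: every column must be documented.
    undocumented = [c.name for c in product.columns if not c.description]
    if undocumented:
        raise GovernanceViolation(
            f"{product.name}: undocumented columns {undocumented}")

    # Policy 2 (data protection): columns whose names suggest personal
    # data must carry an explicit PII flag; a naive name-based
    # heuristic serves as a safety net here.
    suspicious = [c.name for c in product.columns
                  if not c.contains_pii
                  and any(k in c.name.lower() for k in ("email", "phone", "iban"))]
    if suspicious:
        raise GovernanceViolation(
            f"{product.name}: columns {suspicious} look like PII but are not flagged")
```

Wired into the platform's publish step, such checks make governance a property of the platform rather than a matter of each individual team's discipline.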
The data mesh addresses precisely this proximity to business through decentralization and application-specific data products. In this setup, there are not only citizen data analysts who carry out decentralized evaluations, but also IT-savvy business analysts who do decentralized data engineering. Previously, a single source of truth, viewed as the holy grail, was pursued via a central, integrated data model and central governance; both are now dispensed with. Does this mean we lose the single source of truth? Are we aware of this? Are we ready for it? In my estimation, we are now striving for a higher degree of truth: a business-oriented truth which can be used directly in business processes.
Current Relevance of Data Mesh
I currently see the data mesh as being particularly relevant for large organizations. For smaller companies, a central model is still very well suited, for several reasons:
Critical skills are bundled in a single team.
At a small organization, a central team can guarantee proximity to business.
A small, central team does not have the coordination problems/overhead of large central teams.
As technical support improves, the data mesh will also become more relevant for smaller organizations. It's just a matter of time.
In March, I moderated a Google event on data mesh. Together with two experienced speakers, Dr. Anna Hannemann and Peter Kühni, 30 clients discussed their expectations and the challenges of using data mesh in their daily work. Clearly, many expect data mesh to help them better manage growth and gain further relevance. Among the challenges is the data mesh's complexity, which demands consistency across different teams as well as more communication and technical support. The first supporting tools are already available on the market, but they are not yet established and are still complicated to use.
Data Mesh Methodology by b.telligent
That's why b.telligent has adopted Zhamak Dehghani's approach and has already gained implementation experience with various clients. What has emerged is a methodology built on the proven ADKAR change-management model, "Awareness -> Desire -> Knowledge -> Ability -> Reinforcement", which is used to enable sustainable change. This approach offers various support options, ranging from initial workshops with the management team to technical assistance during implementation. This enables us to respond to clients' different needs in a customized manner.