Performance optimisations are not set in stone: an optimisation that has worked extremely well at one company, with its particular system architecture and data volume, will not necessarily work equally well elsewhere. In other words, individual solutions are required. Fundamentally, however, the key is always to find the right balance between main memory and database capacity, and between implementation complexity and maintainability. The focus is always on processing time.
As we learned in the previous article, cache tables in SAP BW transformations are still useful even with an in-memory database such as SAP HANA. Cache tables in this context are internal ABAP tables that are created and populated while a data transfer process (DTP) runs. They are particularly useful when implementation logic has to be applied in ABAP routines, which means the transformation cannot be processed directly in the SAP HANA database.
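As a minimal sketch, such a cache table might be declared and filled in the global part of a routine as follows. All table and field names here (zmaterial_attr, matnr, matkl) are hypothetical placeholders, not taken from the article:

```abap
* Hypothetical cache table for material attributes. A hashed
* internal table gives constant-time access per key; the one-off
* SELECT replaces a database round trip per data record.
* Assumption: matnr is unique in zmaterial_attr.
TYPES: BEGIN OF ty_mat,
         matnr TYPE char18,
         matkl TYPE char9,
       END OF ty_mat.

DATA: gt_mat_cache TYPE HASHED TABLE OF ty_mat
                   WITH UNIQUE KEY matnr.

IF gt_mat_cache IS INITIAL.
  SELECT matnr matkl
    FROM zmaterial_attr
    INTO TABLE gt_mat_cache.
ENDIF.
```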
Often, all you need to do is duplicate or delete data records. This is best done with block operations rather than row-by-row processing, as in the sketch below.
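A minimal sketch of such block operations in an end routine, assuming the usual RESULT_PACKAGE table of BW routines and a hypothetical doc_type field:

```abap
* Delete unwanted records in a single statement instead of
* checking each row inside a LOOP:
DELETE RESULT_PACKAGE WHERE doc_type = 'X'.

* Duplicate a whole block of records: copy, filter, append back.
DATA: lt_copy LIKE RESULT_PACKAGE.
lt_copy = RESULT_PACKAGE.
DELETE lt_copy WHERE doc_type <> 'Y'.
APPEND LINES OF lt_copy TO RESULT_PACKAGE.
```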
In other cases, lookups are required: in a lookup, information from one or more other database tables is added to each (hopefully relevant) data record. Choosing the right lookup structure here is critical to the performance of the BW transformation.
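Continuing the hypothetical example from above, a lookup against the hashed cache table could look like this; one READ per record replaces one SELECT per record:

```abap
* Enrich each record in the package from the cache table.
FIELD-SYMBOLS: <ls_result> LIKE LINE OF RESULT_PACKAGE.
DATA: ls_mat TYPE ty_mat.

LOOP AT RESULT_PACKAGE ASSIGNING <ls_result>.
  " Hashed key access: constant cost regardless of cache size.
  READ TABLE gt_mat_cache INTO ls_mat
       WITH TABLE KEY matnr = <ls_result>-matnr.
  IF sy-subrc = 0.
    <ls_result>-matkl = ls_mat-matkl.
  ENDIF.
ENDLOOP.
```

A sorted table with READ TABLE ... BINARY SEARCH would work as well; the hashed variant is the natural choice when the cache is only ever accessed via its complete key.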
Using an example that I have repeatedly come across in practice recently, this series explores possible techniques for implementing high-performance lookups.
Read our six-part series "High-Performance Lookups in BW Transformations in Practice", which answers the most common practical questions in detail.