Blog

Here in our blog posts you will find tangible know-how, tips & tricks, and the points of view of our experts.

Data Warehouse Automation (Part 1)

The automation of repeatedly recurring tasks is one of the most fundamental principles of the modern world. Henry Ford recognised the resulting advantages, such as a lower error rate, shorter production cycles and consistent, uniform quality. These very advantages can also be applied to data warehouse initiatives.

Read more
Performance Lookups in BW Transformations - Initial Aggregation of Selected Data

By now we know how to select the correct data, which types of internal tables to use for lookups, and how to ensure that we only read through relevant datasets.

In practice, however, it is still often necessary to select a large and/or unbounded amount of data from the database, which then has to be aggregated according to specific rules before it can be read with high performance.
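To make this concrete, here is a minimal sketch of such an initial aggregation, assuming a hypothetical database table ZSALES_ITEM and illustrative field names: the database condenses the data while selecting, and the result lands in a sorted internal table that can be read quickly afterwards.

TYPES: BEGIN OF ty_aggr,
         customer TYPE c LENGTH 10,
         calmonth TYPE n LENGTH 6,
         amount   TYPE p LENGTH 15 DECIMALS 2,
       END OF ty_aggr.

* Sorted, uniquely keyed target table: later reads use a binary search.
DATA lt_aggr TYPE SORTED TABLE OF ty_aggr
     WITH UNIQUE KEY customer calmonth.

* Aggregate on the database so that only the condensed result set is
* transferred to the application server.
SELECT customer, calmonth, SUM( amount ) AS amount
  FROM zsales_item
  GROUP BY customer, calmonth
  INTO CORRESPONDING FIELDS OF TABLE @lt_aggr.

* Inside the routine, a single keyed read then replaces repeated scans.
READ TABLE lt_aggr INTO DATA(ls_aggr)
     WITH TABLE KEY customer = '0000100001'
                    calmonth = '202401'.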

Read more
Use Of The SCD Methodology By The Oracle Data Integrator 12

Part 1: Adjusting The Validity Of The Dataset

As described in the previous blog entry, the Oracle Data Integrator (ODI) offers an integrated solution for keeping a history of data using the SCD (slowly changing dimension) methodology. On closer inspection, when a set of records is actually loaded into a target table using the SCD integration knowledge module (IKM), it becomes apparent that ODI uses certain default values for the end of the dataset's validity period.

Read more
The Effective Use of Partition Pruning for the Optimisation of Retrieval Speed (Part 1)

In this article, I propose an approach to the physical organisation of historical tables that makes it possible to use partition pruning effectively to optimise query performance. The approach is designed specifically for data warehouses, so it assumes relatively complex data loads but efficient selections.

Read more
Performance Lookups in BW Transformation – Finding the Relevant Records

Now that we have dealt with the relevant selection techniques and with the various types of internal tables, the most important performance optimisations for the lookups in our BW transformations are in place.

However, this does not completely cover the topic: so far we have assumed that only the relevant information has to be searched in our lookup tables. But how can we ensure this?
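One common way to ensure this, sketched here under the assumption of a start routine with the standard SOURCE_PACKAGE structure and a hypothetical lookup table ZMATERIAL_ATTR, is to select only those keys that actually occur in the current data package:

TYPES: BEGIN OF ty_lookup,
         material   TYPE c LENGTH 18,
         matl_group TYPE c LENGTH 9,
       END OF ty_lookup.

DATA lt_lookup TYPE HASHED TABLE OF ty_lookup
     WITH UNIQUE KEY material.

* FOR ALL ENTRIES with an empty driver table would select everything,
* so the package has to be checked first.
IF SOURCE_PACKAGE IS NOT INITIAL.
  SELECT material, matl_group
    FROM zmaterial_attr
    FOR ALL ENTRIES IN @SOURCE_PACKAGE
    WHERE material = @SOURCE_PACKAGE-material
    INTO CORRESPONDING FIELDS OF TABLE @lt_lookup.
ENDIF.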

Read more
How Can I Slow Down HANA?

The good performance of a HANA database stems from its consistent design as an in-memory database, as well as from modern compression and column-store algorithms. This means that the database has to read comparatively little data when calculating aggregations over large quantities of data, and can perform this task exceptionally quickly directly in main memory.

However, these benefits may very quickly be rendered moot if the design of the data model is below par. As a result, major gains in terms of runtime and agility can be lost, both for the HANA database and for its users.

Read more
High Performance Lookups in BW Transformations - Selecting the Right Table Type

This is perhaps the most fundamental of all ABAP questions, and not only in the context of high-performance lookups: it arises as soon as you do anything in ABAP.
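For orientation, here is a brief sketch of the three table kinds the question revolves around (the structure and field names are purely illustrative):

TYPES: BEGIN OF ty_rec,
         id    TYPE n LENGTH 10,
         value TYPE string,
       END OF ty_rec.

* Standard table: cheap appends, but an unsorted READ is a linear scan.
DATA lt_standard TYPE STANDARD TABLE OF ty_rec WITH EMPTY KEY.

* Sorted table: kept sorted by its key, keyed reads use a binary search.
DATA lt_sorted TYPE SORTED TABLE OF ty_rec WITH UNIQUE KEY id.

* Hashed table: keyed reads in near-constant time, but only with the full key.
DATA lt_hashed TYPE HASHED TABLE OF ty_rec WITH UNIQUE KEY id.

READ TABLE lt_hashed INTO DATA(ls_rec) WITH TABLE KEY id = '0000000042'.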

Read more
High Performance Lookups in BW Transformations - The Use of Internal Tables vs. SELECTS From the HANA Database

In this series we are focusing on implementation methods for lookups where every data record in a table is to be checked. The larger our data packages and lookup tables are, the more important high-performance implementation becomes.
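As a rough, hedged sketch of the two approaches this series compares, assuming an end routine with RESULT_PACKAGE, CUSTOMER and REGION fields in its structure, and a hypothetical attribute table ZCUST_ATTR:

* Variant 1: one database round trip per record - simple, but the
* SQL parser and network overhead is paid for every single row.
LOOP AT RESULT_PACKAGE ASSIGNING FIELD-SYMBOL(<ls_slow>).
  SELECT SINGLE region
    FROM zcust_attr
    WHERE customer = @<ls_slow>-customer
    INTO @<ls_slow>-region.
ENDLOOP.

* Variant 2: prefetch once into a hashed internal table, then resolve
* every record from the application server's memory.
TYPES: BEGIN OF ty_cust,
         customer TYPE c LENGTH 10,
         region   TYPE c LENGTH 3,
       END OF ty_cust.
DATA lt_cust TYPE HASHED TABLE OF ty_cust WITH UNIQUE KEY customer.

SELECT customer, region
  FROM zcust_attr
  INTO CORRESPONDING FIELDS OF TABLE @lt_cust.

LOOP AT RESULT_PACKAGE ASSIGNING FIELD-SYMBOL(<ls_fast>).
  READ TABLE lt_cust INTO DATA(ls_cust)
       WITH TABLE KEY customer = <ls_fast>-customer.
  IF sy-subrc = 0.
    <ls_fast>-region = ls_cust-region.
  ENDIF.
ENDLOOP.

Which variant wins depends on the size of the data packages and lookup tables - exactly the trade-off examined in this series.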

Read more
SAP HANA – No More Memory? Implement Early Unload!

Optimise Main Memory Use With SAP BW on HANA

Compared with ERP applications, or other applications with relatively steady volumes of data, main memory capacity is always an interesting issue for SAP HANA in data warehouse scenarios. There is one key point to remember: make sure that you never run out of main memory.

Read more
High Performance Lookups in BW Transformations in Practice - Introduction

Performance optimisations are not set in stone: optimisations that have worked extremely well at a company with a certain system architecture and a certain volume of data will not necessarily work equally well elsewhere. In other words, individual solutions are required. Fundamentally, however, the key point is always to find the right balance between main memory and database capacity, and between implementation complexity and serviceability. The focus is always on processing time.

Read more
Enterprise Data Warehouse and Agile SQL Data Mart – SAP BW on HANA can do both – the “Mixed Scenario”

Where an SAP Business Warehouse is in use, experience has shown that companies frequently pursue different approaches that lead to the development of a parallel infrastructure. This tends to be managed by individual departments rather than by IT. Solutions such as QlikView, SQL Server, Oracle and TM1 are widely used, and they fulfil their tasks very well in the appropriate situation - otherwise they would not be so popular.

Read more
SAP BW on HANA – Does a Cache Still Make Sense in ABAP Routines?

With the launch of SAP BW on HANA in 2010, many previous measures for enhancing performance in BW systems became obsolete. At the same time, however, many new questions arise regarding the new platform. One very relevant question is whether it still makes sense to cache lookups in ABAP (Advanced Business Application Programming) routines. With HANA, the data is held in main memory in the database beneath the application server and is optimised for queries, while the queries issued in routines are executed on the application server. Whether a cache for database accesses in ABAP routines still makes sense is therefore discussed in detail in this blog post:

For frequently recurring data, the answer is yes. If, for example, the attribute "continent" is to be read for the InfoObject "country", the overhead of going through the SQL parser, the network and so on to HANA is too high to pay again for every single row: several technical layers sit between the ABAP program and the actual data and would be traversed repeatedly. However, if several joins between tables are required, or if the number of rows to be read is very large, the advantage tilts back towards the HANA database.
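A minimal, hedged sketch of such a cache (the attribute table /BI0/PCOUNTRY and its field names are assumptions, and in a real transformation the class would live in the routine's global part): the database is asked only on a cache miss, and every further row with the same country is served from a hashed table in the application server's memory.

CLASS lcl_country_cache DEFINITION.
  PUBLIC SECTION.
    CLASS-METHODS get_continent
      IMPORTING iv_country          TYPE csequence
      RETURNING VALUE(rv_continent) TYPE string.
  PRIVATE SECTION.
    TYPES: BEGIN OF ty_cache,
             country   TYPE c LENGTH 3,
             continent TYPE string,
           END OF ty_cache.
    CLASS-DATA gt_cache TYPE HASHED TABLE OF ty_cache
               WITH UNIQUE KEY country.
ENDCLASS.

CLASS lcl_country_cache IMPLEMENTATION.
  METHOD get_continent.
    READ TABLE gt_cache INTO DATA(ls_cache)
         WITH TABLE KEY country = iv_country.
    IF sy-subrc <> 0.
      " Cache miss: one round trip to HANA, then remember the result
      " for all further rows of this and subsequent data packages.
      SELECT SINGLE continent
        FROM /bi0/pcountry
        WHERE country = @iv_country
          AND objvers = 'A'
        INTO @ls_cache-continent.
      ls_cache-country = iv_country.
      INSERT ls_cache INTO TABLE gt_cache.
    ENDIF.
    rv_continent = ls_cache-continent.
  ENDMETHOD.
ENDCLASS.

A field routine would then simply call lcl_country_cache=>get_continent( SOURCE_FIELDS-country ) instead of issuing its own SELECT for every row (again assuming a COUNTRY field in the source fields).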

In my experience with customers handling large data volumes, a cache in ABAP can in some cases triple the speed of DTP execution in an SAP BW on HANA system. Of course, this always depends on the situation (e.g. data distribution, homogeneity of the data) as well as on the infrastructure in place - and all of this without even using shared memory. With shared memory, only one database request is made for all data packages together, i.e. per load; in practice, however, it is unnecessarily complicated to handle.

Read more