A successful proof of concept (PoC) for a data-analysis pipeline is often followed by a long road to production. Ibis can simplify this journey and thus add value faster.
After a data-analysis pipeline has been developed locally in Python, the code often needs to be rewritten before it can run in production. But does it really have to be that way? Created by Wes McKinney, the author of the pandas library, the Python Ibis library offers a compelling way to unify data processing between the development and production environments, enabling analytics teams to reach production faster. This blog post shows how it works.
Development of Reporting/Analytics Pipelines
Reporting and analytics pipelines are an important part of a data-driven enterprise. To build such pipelines, teams often use isolated local development environments in order to produce results as quickly as possible. Afterwards, however, they face the challenge of transferring the pipelines to production systems. The problem: the code often has to be rewritten before it can run in, for example, a data warehouse.
One reason for this is the use of different technologies for data processing in the development and production environments. These differences lead to the following challenges:
The development team needs additional knowledge of technologies from the production environment.
As a result, additional or different employees are needed once initial development is complete. Limited employee availability can therefore delay projects.
Errors or unwanted changes may occur when code is rewritten. This can cause a loss of confidence among stakeholders.
The Python Ibis library solves this by unifying data processing between the development and production environments. Code written with Ibis runs without modification in a local environment as well as on databases and data warehouses.
How Does Python Ibis Work?
The first step in using Ibis is to connect to a data source. This could be a pandas DataFrame in a local development environment, or a database or data warehouse table in production.
Illustration of an Ibis connection to a local SQLite database
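As a minimal sketch of what such a connection looks like (the file name and table name here are hypothetical), connecting to a local SQLite database takes only a couple of lines:

import ibis

# Connect Ibis to a local SQLite database file (hypothetical file name)
con = ibis.sqlite.connect("churn.db")

# Obtain a lazy table expression; no data is read at this point
customers = con.table("customers")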
The data-transformation logic can then be written against the Ibis API. Ibis generates the code for the relevant backend, such as pandas, Spark or BigQuery; the backends remain responsible for executing it. As with other big-data frameworks, the transformations are evaluated lazily, i.e. only when the results are actually needed.
Illustration of a local Ibis connection and unit testing of Ibis code
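To illustrate the lazy-evaluation idea (table and column names are made up for this sketch), a churn-rate aggregation could look as follows; nothing is computed until execute() is called:

import ibis

con = ibis.sqlite.connect("churn.db")  # hypothetical local database
customers = con.table("customers")

# Build the transformation lazily with the Ibis API
churn_by_segment = (
    customers
    .filter(customers.active_last_year == 1)
    .group_by("segment")
    .aggregate(churn_rate=customers.churned.mean())
)

# Only now does the backend run the query; the result arrives as a pandas DataFrame
df = churn_by_segment.execute()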
Use Case: Churn Project With Ibis Library on Google Cloud Platform (GCP)
Suppose an advanced analytics team is working on a new pipeline for their marketing department, which wants daily metrics on the churn rate of key customer segments in order to better manage its anti-churn campaign. The team first builds a minimum viable product using the Ibis library in Jupyter notebooks on Vertex AI (Google-/user-managed VMs with preinstalled JupyterLab), where the data is stored locally.
Once the pipeline has reached the quality required for production, it is sufficient to replace the connection to the local data source with the corresponding tables in the data warehouse, BigQuery in this case. This quick and easy transfer of the pipeline with the Ibis library allows the team to deliver value to the marketing department faster.
Illustration of Ibis code first tested locally and then executed in production mode against a DWH
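In practice, the switch amounts to swapping the connection line while the transformation code stays untouched. A sketch under the same assumptions as above (project and dataset IDs are placeholders):

import ibis

# Development: local SQLite file (hypothetical name)
# con = ibis.sqlite.connect("churn.db")

# Production: the same pipeline pointed at BigQuery (placeholder IDs)
con = ibis.bigquery.connect(project_id="my-gcp-project", dataset_id="marketing")

customers = con.table("customers")

# The transformation logic is identical to the local version
churn_by_segment = (
    customers
    .filter(customers.active_last_year == 1)
    .group_by("segment")
    .aggregate(churn_rate=customers.churned.mean())
)

df = churn_by_segment.execute()  # now executed by BigQuery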
So much for the basics. In the second part of this blog series, I'll explain how to set up Ibis on GCP Vertex AI, and use an example to show how easily a pipeline can be switched from a local data source to a DWH with Python Ibis.