“Traditional” (if you want to call several years “traditional”) BI frontend tools increasingly include an in-memory engine that loads data into RAM to calculate measures far faster, to make changes of visualization feel more natural (like switching from a bar to a line chart), and to store uploaded data in memory so it can be combined with data loaded from a database.
This worked out very well: customers became more familiar with the tools, and their adoption in the departments grew. It also had a nice side effect in that these tools created a new market for “self-service BI tools”. So, no reason to worry, then?
Yes, there is. The rise of in-memory databases like SAP HANA brings new challenges. If you have an in-memory database, you want to get the most out of it, including its speed when working with the data in the frontend. You don’t want to wait for your data while it is loaded from the database; you want instantaneous results. You may also have really complex calculations in your database, generated via several functions, that you don’t want to repeat in the in-memory engine of your frontend (assuming it is even possible to repeat them there).
That’s why there has to be a paradigm shift in the frontend market: don’t store all data in the frontend’s in-memory engine, but only selected data, e.g. dimension values to speed up selections, or data uploaded via files from the user’s desktop. This will remain very important to support the growing “agility” of departments using BI tools in their daily decision making.
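A minimal sketch of that split, using Python with an in-memory SQLite database as a stand-in for the backend (all table and column names here are hypothetical): only the small list of distinct dimension values is cached in the frontend’s memory to populate a filter, while the large fact data stays in the database and is queried on demand.

```python
import sqlite3

# Stand-in for the backend database (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("EMEA", 100.0), ("EMEA", 250.0), ("APAC", 80.0)],
)

# Cached once in the frontend's memory: a tiny list of dimension
# values, e.g. to fill a selection dropdown instantly.
region_cache = [
    r for (r,) in
    conn.execute("SELECT DISTINCT region FROM sales ORDER BY region")
]

def total_for(region):
    # The fact data itself is NOT held in the frontend's RAM;
    # it is queried from the database only when a selection is made.
    (total,) = conn.execute(
        "SELECT SUM(amount) FROM sales WHERE region = ?", (region,)
    ).fetchone()
    return total
```

The point is the asymmetry: the dimension cache is a handful of strings, while the fact table may be arbitrarily large and never needs to leave the database wholesale.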
This agility is needed to get results earlier, by adapting to the need for new insights much faster.
Earlier, displaying the data wasn’t an issue. Then came the fast-growing companies with really successful self-service BI tools, which had an important impact on the acceptance of such “magic” tools (compared to classic MS Excel). They smoothed the way for a more data-centric approach to decision making. Thanks a lot to them!
But now it’s time for a small shift. Not as big as you might think, nor as big as the spread of self-service BI tools was. But this small step will become more and more important as companies move into the area of Big Data. As I mentioned in my last post, one key factor for Big Data will be visualization, because the frontend is the only interface the user interacts with. And this depends massively on the frontend tools’ performance, usability, and ability to work with different data sources really quickly.
And if you have peta-, exa-, or however many bytes stored in your Hadoop cluster, do you really think they will fit into the memory of your server, when you already need several servers just to run the cluster?
No, because data is growing much faster than Moore’s Law.
That’s why it’s important to leverage the full potential of in-memory database systems: keep processing and calculation where they are done best, in the database. And let the frontend tools do what they do best: visualize data and information, enable insights by displaying information in a smart way, and give users the ability to work agilely in an agile world.
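To make that concrete, here is a hedged sketch (again using Python with in-memory SQLite as a stand-in backend, with a hypothetical schema) contrasting the two approaches: fetching every row into the client and aggregating there, versus pushing the aggregation down to the database so only the small result set the visualization needs crosses the wire.

```python
import sqlite3

# Stand-in for the backend database (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("EMEA", 100.0), ("EMEA", 250.0), ("APAC", 80.0)],
)

# Anti-pattern: pull every row into the client, aggregate there.
# With billions of rows, this is exactly what won't fit in RAM.
rows = conn.execute("SELECT region, amount FROM sales").fetchall()
client_side = {}
for region, amount in rows:
    client_side[region] = client_side.get(region, 0.0) + amount

# Pushdown: the database computes the aggregate and returns only
# one row per region - the data the chart actually displays.
pushed_down = dict(
    conn.execute("SELECT region, SUM(amount) FROM sales GROUP BY region")
)
```

Both paths produce the same totals; the difference is where the work happens and how much data has to travel to the frontend.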
To conclude: the really great BI tools with their own in-memory engines, which smoothed the way for BI adoption in departments, were a great success. But in the age of in-memory databases and Big Data, they have to make a small shift to harness the speed and capabilities of in-memory databases and to leverage the full potential of Big Data.
See how Capgemini can support you in selecting the right strategy and tools for working in an Insights Driven Business.