Moving Big Data delivery from the West Coast to the East Coast—part 1

Over the past few years we have seen a paradigm shift in the capabilities enterprises use to deliver Big Data projects.

Back in 2014, highly skilled software engineers were delivering data pipelines using a variety of low-level tools.

As these projects have matured, a number of open source tools (I use the term West Coast for these) have appeared on the market to bring enterprise stability and management to data pipelines. They cater for relatively mature enterprise capabilities such as data pipeline execution, monitoring, and reruns.
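To make the three capabilities concrete, here is a minimal, illustrative sketch of a pipeline runner that executes tasks in order, records per-task status for monitoring, and reruns a failing task a limited number of times. The task names, retry policy, and function signatures are my own assumptions for illustration, not the API of any scheduling product mentioned in this post:

```python
# Illustrative sketch only: a toy scheduler demonstrating pipeline
# execution, monitoring (status tracking), and reruns (bounded retries).
# Names and policy are hypothetical, not tied to any product.

def run_pipeline(tasks, max_retries=2):
    """Run (name, callable) tasks in order.

    Retries a failing task up to max_retries extra times, records
    each task's outcome and attempt count, and stops the pipeline
    if a task still fails after its retries are exhausted.
    """
    status = {}
    for name, task in tasks:
        attempts = 0
        while True:
            attempts += 1
            try:
                task()
                status[name] = ("succeeded", attempts)
                break
            except Exception:
                if attempts > max_retries:
                    status[name] = ("failed", attempts)
                    return status  # halt downstream tasks on hard failure
    return status


# Usage: a flaky extract step succeeds on its second attempt (a rerun),
# so the downstream transform still runs.
calls = {"n": 0}

def flaky_extract():
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("transient failure")

def transform():
    pass

result = run_pipeline([("extract", flaky_extract), ("transform", transform)])
```

Products in this space differ mainly in how much of this bookkeeping (scheduling calendars, dependency graphs, operator alerting) they handle for you rather than leaving it to hand-written code like the above.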

Over the past 30 years there has been a stable enterprise capability in this area (I like the term East Coast for this), with products such as BMC Control-M and IBM Tivoli. What is not well known is that these enterprise products also provide Big Data pipeline execution, monitoring, and rerun capabilities.

Utilizing enterprise capabilities that companies already have in their application stacks reduces reliance on new open source products or home-grown tooling, increases reuse of existing license agreements, and de-risks Big Data projects.
