Moving Big Data delivery from the West Coast to the East Coast—part 1

Over the past few years, we have seen a paradigm shift in the capabilities that enterprises use to deliver Big Data projects.

As recently as 2014, highly skilled software engineers were delivering data pipelines using a variety of low-level tools, such as hand-coded MapReduce jobs, shell scripts and cron schedules.

As these projects have matured, a number of open source tools (I have used the term "West Coast" for these) have appeared on the market to bring enterprise stability and management to the data pipelines. They cater for relatively mature enterprise capabilities such as data pipeline execution, monitoring and reruns.
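To illustrate the "West Coast" style, here is a minimal sketch of a pipeline in Apache Airflow, one of the open source orchestrators in this space. It assumes Airflow 2.x, and the DAG name, commands and retry settings are hypothetical:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

# Retries give us the "rerun" capability; the Airflow UI
# provides execution history and monitoring.
default_args = {
    "owner": "data-engineering",
    "retries": 2,                          # automatic reruns on failure
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="daily_sales_pipeline",         # hypothetical pipeline name
    default_args=default_args,
    start_date=datetime(2020, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest_raw_data",
        bash_command="spark-submit /jobs/ingest.py",     # hypothetical job
    )
    transform = BashOperator(
        task_id="transform_data",
        bash_command="spark-submit /jobs/transform.py",  # hypothetical job
    )
    ingest >> transform                    # declares the execution order
```

The retry settings provide automatic reruns, while the scheduler and web UI cover execution and monitoring, which are exactly the capabilities listed above.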

Over the past 30 years there has been a stable enterprise capability in this area (I like the term "East Coast" for this), with products such as BMC Control-M and IBM Tivoli. What is less well known is that these enterprise products also provide Big Data pipeline execution, monitoring and rerun capabilities.
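As a concrete example, BMC Control-M exposes an Automation API that accepts pipeline definitions as JSON "jobs-as-code". The sketch below builds such a definition in Python; the folder name, host, connection profile and the exact job-type and parameter strings are assumptions to be verified against the BMC documentation for your release:

```python
import json

# A minimal "jobs-as-code" definition for Control-M's Automation API.
# Everything here (folder name, host, connection profile, job type,
# parameter names) is a hypothetical example; check the exact strings
# against the BMC Automation API documentation for your version.
pipeline = {
    "DailySalesFolder": {
        "Type": "Folder",
        "IngestSales": {
            "Type": "Job:Hadoop:Spark:Python",  # assumed Big Data job type
            "Host": "edge-node-01",
            "ConnectionProfile": "DEV_HADOOP",
            "SparkScript": "/jobs/ingest.py",   # assumed parameter name
        },
    }
}

# Written to disk, this definition would typically be validated and
# deployed with the Automation API CLI (e.g. "ctm build" / "ctm deploy").
with open("daily_sales_pipeline.json", "w") as f:
    json.dump(pipeline, f, indent=2)
```

The point is that the same scheduler an enterprise already uses for batch workloads can define, run and monitor Hadoop and Spark jobs alongside everything else.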

Utilizing enterprise capabilities that companies already have in their application stack reduces reliance on new open source products or home-grown tooling, increases reuse of existing license agreements and de-risks Big Data projects.
