CEO Aaron Auld gives CBR his take on recent big tech news such as Dell-EMC and HPE-Micro Focus.
Technology years are like dog years: such is the pace of change that the view of the world from a matter of weeks ago can be distinctly different from the world as we experience it today. Last month alone, we heard that Dell had transitioned from a company that sells PCs and peripherals into Dell-EMC, a company that also sells enterprise storage and software (but no longer printers). Hot on the heels of that announcement, Micro Focus went from being a relatively little-known UK software business to owning most of HPE’s software business, including the analytic database Vertica.
Vertica users must be wondering what is next, having been passed from pillar to post in the last couple of years. Vertica started out as a small independent company in 2005 and, after initial success, was acquired by Hewlett Packard in 2011. In 2015 the parent company was split to form HPE and now, less than a year later, Vertica has changed hands again to become part of Micro Focus – although nothing official has landed on the Vertica homepage.
This turn of events follows closely on recent news from Actian, which acquired ParAccel in 2013. It has now been revealed that Actian is pulling the plug on its Actian Analytics Platform, which includes the analytic databases Actian Matrix and Actian Vector, in order to focus on operational data management and data integration technologies.
Consolidation, acquisition and “spin-merges” are nothing new in technology. Data analytics in particular has seen a lot of change lately, with huge interest in open source big data tools such as Hadoop and Hive and more recently, Spark. This growth in open source technology has forced vendors in the data analytics space to think deeply about the relevance of their value propositions and more often than not to hedge their bets by integrating the open source tools into their offerings.
Indeed, you would be hard pressed to find a vendor’s analytic database website that doesn’t extol the benefits of integrating Hadoop into its offering. We certainly do. That’s because Hadoop is a very sensible approach to storing and retrieving your data. It is a great location for your growing “data lake”, but here’s the thing: if you need to run interactive queries on your data, you still need a fast database. And if high performance matters to your business, then you need a very fast database.
It’s all about finding the right tool for the job, and combining EXASOL’s MPP in-memory database with Hadoop can be a match made in heaven, as many of our clients have found.
But what about Spark?
On the EXASOL website we run a “Question of the quarter” section, but for our sales team the question of the year has to be “what about Spark?” Apache Spark is an open source cluster computing framework for data processing. It is only about two years old but has already made a name for itself – partly because of the claim that it overcomes the shortcomings of Hadoop MapReduce with its fast query times. With Spark SQL you even get an SQL query engine.
It sounds great, and once properly established it may well be – always assuming you have a Spark-shaped problem to solve and plenty of time to implement it. The truth is, it is a very technical offering that starts out as a general-purpose platform and requires no small effort to tune and optimise.
But then again, how much time and effort do you plan to invest, given that some people are now saying that Apache Spark is already out of date and that Apache Flink is the next big data thing? With Flink, the argument goes, you get all of the advantages of Spark without its limitations on data latency.
Anyone trying to stay up-to-date with the latest thinking in data analytics is in for a tough time. It changes all the time. Such is the new analytical world we live in. Vendors change, strategies are abandoned, tools quietly cease to be supported and the “flavour of the month” for open source data projects is in constant flux.
In some ways EXASOL has had an easy time of it. When we started back in the early 2000s we concentrated on just one single thing: creating a fast analytic database. And understanding the importance of focus, we designed the database to be not only fast, but also highly automated, thus permitting our customers to concentrate on what really matters to them. We have been diligently working on that idea ever since. We have stayed completely independent, stuck with our strategy and the product today is lean and easy to use. Did I mention that it’s very fast?
We are fortunate that the market has moved toward us in many ways – the rise of data analytics and the proliferation of business intelligence tools makes EXASOL more relevant than ever. I don’t think that will change any time soon.