“We have moved on to newer, faster, more reliable, more agile, more versatile technology at lower cost and higher scale”
Sometimes all it takes is a gentle prod.
Twelve weeks ago Oracle’s Larry Ellison took to the stage to announce the latest additions to his company’s autonomous database service.
When you’re Larry Ellison it’s hard not to get sidetracked though.
“It’s kind of embarrassing when Amazon uses Oracle but they want you to use Aurora and Redshift,” he said, referring to Amazon’s rival database and data warehousing products. “They’ve had 10 years to get off Oracle, and they’re still on Oracle.”
In latest episode of "uh huh, keep talkin' Larry," Amazon’s Consumer business turned off its Oracle data warehouse Nov 1 and moved to Redshift. By end of 2018, they'll have 88% of their Oracle DBs (and 97% of critical system DBs) moved to Aurora and DynamoDB. #DBFreedom
— Andy Jassy (@ajassy) November 9, 2018
The comment clearly stung Amazon.
This Friday the company announced it was nearly done with Oracle: by the end of 2018, Amazon’s consumer business will have moved 97 percent of its critical system databases off Oracle and onto Aurora and DynamoDB, AWS chief executive Andy Jassy said.
Amazon CTO Werner Vogels chimed in with an “RIP”…
Amazon's Oracle data warehouse was one of the largest (if not THE largest) in the world. RIP. We have moved on to newer, faster, more reliable, more agile, more versatile technology at more lower cost and higher scale. #AWS Redshift FTW! https://t.co/AE50r7MUmW
— Werner Vogels (@Werner) November 10, 2018
Oracle has repeatedly emphasised its rivals’ dependence on its databases, with Ellison telling analysts that his company is “way ahead of Amazon in cloud infrastructure technology.”
It has also been making a major push with its autonomous database portfolio: last month the company announced the general availability of its “self-driving” Autonomous NoSQL Database, the newest addition to the Oracle Autonomous Database portfolio, which was launched with much fanfare late last year.
AWS Oracle Battle: Cost, Scalability Dispute
The company said the database uses machine learning and automation capabilities to deliver a NoSQL database with 99.995 percent availability, at what it claims is up to 70 percent lower cost than Amazon’s rival DynamoDB.
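For context on what that availability figure means in practice, a back-of-the-envelope calculation (ours, not Oracle’s) translates 99.995 percent uptime into yearly downtime:

```python
# Back-of-the-envelope: yearly downtime implied by 99.995% availability.
availability = 0.99995
minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a non-leap year
downtime_minutes = (1 - availability) * minutes_per_year
print(f"{downtime_minutes:.1f} minutes of downtime per year")  # ~26.3
```

In other words, the claim amounts to roughly 26 minutes of permitted downtime per year.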
In August Oracle unveiled the Oracle Autonomous Transaction Processing Cloud, which has been designed to handle complex sets of high-performance transactions as well as mixed workloads such as batch processing, reporting and Internet of Things data.
That was the second of Oracle’s cloud-based autonomous databases, with the first, Oracle Autonomous Data Warehouse, optimised for analytics.
With Amazon moving off Oracle’s data warehouse, the latter is trying to get customers to move in the opposite direction, recently releasing a SQL Developer Amazon Redshift Migration Assistant, which provides a framework for migrating Amazon Redshift environments on a per-schema basis.
Meanwhile, a post on AWS’s blog last year by David Yahalom, CTO and co-founder of NAYA Tech, a San Jose-based database specialist, pointed to “major differences” in how Amazon chooses to provide scalability and availability in Aurora vis-a-vis Oracle RAC.
He wrote: “These differences are due mainly to the existing capabilities of MySQL/PostgreSQL and the strengths that the AWS backend can provide in terms of networking and storage. Instead of having multiple read/write cluster nodes access a shared disk, an Aurora cluster has a single primary node (“master”) that is open for reads and writes and a set of replica nodes that are open for reads with automatic promotion to primary (“master”) in case of failures.”
“Whereas Oracle RAC uses a set of background processes to coordinate writes across all cluster nodes, the Amazon Aurora Master writes a constant redo stream to six storage nodes, distributed across three Availability Zones in an AWS Region.”
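The durability argument behind that six-node redo stream can be sketched in a few lines. Aurora’s published design commits a write once four of the six storage copies acknowledge it; the node layout and function names below are invented for illustration, not AWS code:

```python
# Illustrative sketch (not AWS code): why a 4-of-6 write quorum across
# six storage nodes in three Availability Zones survives the loss of
# an entire AZ. Names here are invented for illustration.

AZS = ["az-a", "az-b", "az-c"]
# Six storage nodes, two per Availability Zone.
NODES = [(az, i) for az in AZS for i in range(2)]
WRITE_QUORUM = 4

def write_succeeds(failed_nodes):
    """A redo-log write commits if at least WRITE_QUORUM of the
    six storage nodes acknowledge it."""
    acked = [n for n in NODES if n not in failed_nodes]
    return len(acked) >= WRITE_QUORUM

# Losing a whole AZ (two nodes) still leaves four acknowledgements.
az_a_down = {n for n in NODES if n[0] == "az-a"}
print(write_succeeds(az_a_down))                   # True

# An AZ outage plus one more node drops below the write quorum.
print(write_succeeds(az_a_down | {("az-b", 0)}))   # False
```

This is the trade-off Yahalom describes: rather than coordinating writes across read/write cluster nodes as RAC does, Aurora pushes durability down into replicated storage and keeps a single writable primary.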