Reliable databases are landing on the radar of more and more businesses, especially with terms like Big Data and data scientist being thrown around so frequently. Technology giants have been building enterprise products to help their customers store and process ever larger sets of data, and Google is now adding to that list with its newest product: Cloud Bigtable.
You have probably already experienced the power of Bigtable, because it is the same storage system that runs Google services such as Gmail, Google Analytics, and Google Search. Considering that Google stores an estimated 15 exabytes of data (one exabyte equals a million terabytes), according to Cirrus Insight, Bigtable has the muscle to help any large-scale business with its data storage and processing.
Bigtable is a NoSQL database, which means it can handle the everyday workload of a business that depends on a highly active database: daily access to the data, storage, and processing power that can be scaled easily as a company's data needs grow. NoSQL databases are growing in popularity because of their efficiency, as well as perks such as automatic error repair and lower maintenance requirements compared to a traditional relational database.
Compared to the alternatives, however, Google's Cloud Bigtable has one big highlight: it is compatible with the ever-growing Apache HBase API. That means Cloud Bigtable can integrate easily with Hadoop applications, which are important big data management tools that make working with large datasets much easier than before. Businesses that don't want to use the Apache HBase API can also use Google's own Cloud Dataflow processing service.
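To give a sense of what that compatibility means in practice, here is a minimal sketch using the standard Apache HBase client API, the same kind of code that can be pointed at a Cloud Bigtable instance when Google's Bigtable-compatible HBase client library is on the classpath and configured for your project. The table name "my-table", the column family "cf", and the row and column names are placeholders for this illustration, not anything defined by Google or HBase.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseApiSketch {
    public static void main(String[] args) throws Exception {
        // Standard HBase configuration; with a Bigtable-compatible HBase
        // client configured, the same code works against Cloud Bigtable.
        Configuration config = HBaseConfiguration.create();

        try (Connection connection = ConnectionFactory.createConnection(config);
             Table table = connection.getTable(TableName.valueOf("my-table"))) {

            // Write one cell: row key, column family, qualifier, value.
            Put put = new Put(Bytes.toBytes("row-1"));
            put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("greeting"), Bytes.toBytes("hello"));
            table.put(put);

            // Read the same cell back.
            Result result = table.get(new Get(Bytes.toBytes("row-1")));
            byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("greeting"));
            System.out.println(Bytes.toString(value));
        }
    }
}

Because the reads and writes go through the ordinary HBase interfaces, an application already written against HBase or other Hadoop tooling should, in principle, need configuration changes rather than a rewrite to move onto Cloud Bigtable.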
On top of the API compatibility, Google promises that Cloud Bigtable is very fast to set up: a business can be up and running in a matter of seconds, and Bigtable scales automatically to accommodate the business's needs. Latency, the "lag" between when a user requests data and when that data is actually returned, is among the lowest in the industry at six milliseconds, according to TechCrunch. Cassandra, a competitor, has latency between 156 and 285 milliseconds, for example.
Single-digit latency is good news for businesses because it means faster data retrieval, faster analytics, and faster reporting. Even though latency is measured in milliseconds, that time adds up on time-sensitive projects, and rapidly increasing latency can be a sign that the database needs to be moved to something larger. Migrating a large database can be expensive and time-consuming if no scalability procedures are already in place, which is where Cloud Bigtable's automatic scaling comes into play.
Cloud Bigtable shouldn't be confused with Google's other NoSQL database service, Cloud Datastore, which is aimed at developers building on App Engine. In contrast, Cloud Bigtable is focused on enterprise-level businesses and companies with large, complex workloads that need to be number-crunched. The new NoSQL database product is currently in beta.