While I have my head in the clouds, I should mention that Vertica has a cloud solution that they manage for you. It's not new, but it gives some perspective.
With competitive offerings in the $10-20k per terabyte range, this is an attractive offer and a great way to try before you invest when you have that much data.
I hear Vertica is a screamer, but I can’t imagine getting sub-second results for 3 TB of data on 3 virtualized servers, for the same reasons I gave in my previous post.
There’s a point where query response time is low enough that it changes the analysis game completely. This is the amount of time that a decision maker is willing to wait to get the next answer. Not the first answer, but the next one, and the next one. Eventually the frustration of waiting is worse than not knowing.
Salesperson: “What shipped yesterday? Ok, what’s the breakdown? Woah, what happened in that department? That markdown is too steep, who wrote that order? Which customer? What’s that rep’s extension?”
With one-second results, that analysis would have happened in the time it took you to read it. This is a competition against human nature. One-second results make the difference between wishing you had the answer and getting it, multiplied over and over throughout the day.
The impact on a business is not from faster queries alone. Behavior changes when decision makers trust that the data is immediately at hand. The relationship to data changes when you can find the answer while you think about it and not lose your train of thought.
Because the query engine can respond to any query in one second, we can make every path of exploration available at the beginning. One application can take the place of many reports. Users can begin to query immediately and along any drill path. The benefit of one-second results is diminished if users have to first identify the report that has the data and filtering options they need.
Can OLAP deliver this? No. We must combine speed of execution with rapid application development and full transaction detail, and eliminate predefined drill paths. OLAP/MOLAP/ROLAP/SCHMOLAP can’t take us into this new era. In-memory associative and column-store databases can.
With one-second results, you don’t build a query and then start the execution. Instead, the results update as soon as you pick the first filtering option, whether it’s the day, order number or country of origin. You get immediate feedback before you make your next selection. Also, the filter options can change based on the results. Maybe you remove options that are incompatible with the selections made so far. By shrinking the feedback loop with one-second results, the filtering options can show intelligent behavior to help guide users or add context to the results. This level of dynamism lets users roll back and forth through their ideas. They can cross-reference without losing a train of thought, or discover and follow tangents that are more important.
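To make the mechanics concrete, here is a rough sketch of what the engine sees as a user clicks: the same aggregate re-issued with one more predicate after each selection (the table and field names are invented for illustration).

```sql
-- First selection: the user clicks a day.
SELECT department, SUM(sale_amount) AS sales
FROM pos_sales
WHERE sale_date = '2008-10-12'
GROUP BY department;

-- A second later the user clicks a country of origin; the same query goes back
-- out with one more predicate, and the results and remaining filter options
-- refresh around it.
SELECT department, SUM(sale_amount) AS sales
FROM pos_sales
WHERE sale_date = '2008-10-12'
  AND country_of_origin = 'IT'
GROUP BY department;
```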
It’s not just one decision maker getting an answer quickly. Interactions and processes benefit. Workers get feedback in near-real-time. We can do tricks like running the same query once per second. Ridiculous? This isn’t paradise; I live in the land of low budgets and “getting it done”. Vendor and customer data is available right when they’re on the phone. Less “I’ll get back to you” and more “I have that info right in front of me.” I’ve also noticed that it’s harder to bullshit when anyone in the meeting can easily explore the data on their laptop and get the real answer.
In companies where I can deliver one-second results, I spend a lot of time reconditioning people to ask for anything they desire, because now I can put any information at their fingertips, no matter how many tables or how much detail is involved, and with little advance knowledge of how they want to look at the data.
For nearly all companies, the entire transactional database can be copied as-is into a one-second query engine. Add a BI tool on top, rename some fields and identify the table relationships. Time is spent developing the frontend to deliver the best reports and analysis. One person can build the entire solution. Since the transactional model is already validated, there is no data modeling, no formal architecture and little documentation. This might be frightening to enterprises but the benefits are huge for strapped IT budgets.
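As a sketch of how little modeling that takes, the renaming can be as thin as a view per transactional table if you’d rather keep it in the database than in the BI tool (every name below is invented):

```sql
-- The transactional table is copied or loaded as-is; a thin view gives analysts
-- readable column names. No star schema, no transformation step.
CREATE VIEW order_lines AS
SELECT ordln_id  AS order_line_id,
       ord_id    AS order_id,
       itm_sku   AS sku,
       ordln_qty AS quantity,
       ordln_amt AS line_amount
FROM   erp_ordln;
```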
A one-second query engine needs an interactive frontend to take advantage of it. We also need simpler ETL tools. With the engine in place first, developers will connect the dots and the tools will be built to take advantage of the new abilities.
None of this is theoretical. I’ve been doing this for the past 7 years with an in-memory associative database, ETL tool and interactive frontend called QlikView. When information flows at the speed of thought, it changes decision-maker behavior and the business process. When we can prototype and deploy one-second query engines quickly, then ideas can be built and tested quickly. Most ideas won’t be new or unexpected, but they were impossible or impractical without one-second results.
I wondered if InfoBright would do this. Before going open source, their website described the product as a kind of bulk storage rather than a data warehouse: a place to put data that needs to remain accessible but that you don’t need to query fast or frequently. That was the enterprise story. As an open-source project, I think they have a much more compelling value proposition. It’s the democratization of analysis. Try before you buy (the Enterprise Edition). Rapid prototype / rapid failure. Connects to any SQL tool, platform or language. As easy as working with MySQL.
My test data set is 37 million rows of point-of-sale transactions. Total data size as CSV is 7GB. My test system stinks. I need to make that clear so that my numbers are not seen as representative of what’s possible with InfoBright. After seeing the product in action, I’m sure that server hardware will do much better.
How fast to bulk load?
InfoBright loads are multi-threaded, but even on my single-processor desktop the loads are fast! With one processor, rows were inserted at about 1.8 million rows/minute (336 MB/min), and the load rate slowed by about 10% over the full 37 million rows. Disk access was minimal during the inserts. Overall, my little desktop moved an average of 30,000 rows/sec, or 5.6 megabytes/sec. That’s 20GB/hour! My processor was fully loaded the entire time. With faster cores and multi-threading, the load should be much quicker. When I get the chance to load Linux on a bigger box, I’ll be eager to see how it performs.
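For reference, a load along these lines is just MySQL’s familiar bulk-load statement pointed at an InfoBright table. The table definition and file path below are invented stand-ins for my point-of-sale extract, and (as I found out later) InfoBright’s loader accepts only a subset of the LOAD DATA options.

```sql
CREATE TABLE pos_sales (
    sale_date   DATE,
    store_id    INT,
    sku         VARCHAR(20),
    quantity    INT,
    sale_amount DECIMAL(10,2)
) ENGINE=BRIGHTHOUSE;   -- InfoBright's column-store engine

-- Bulk load the CSV extract; keep the options simple, since not everything
-- standard MySQL allows here is supported by the InfoBright loader.
LOAD DATA INFILE '/data/pos_sales.csv'
INTO TABLE pos_sales
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';
```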
How big on disk?
I have 7GB of data. Using MySQL’s default MyISAM storage engine with an 8-bit ASCII representation requires… 7GB. No surprise there. InfoBright took 591.2MB, as reported from my MySQL management console. That’s a 92% reduction in size or a 12:1 compression ratio.
The status data coming from the InfoBright engine includes the storage size of each column and the total size of the table. Because each column is reported separately, I can see exactly how much space I would save if I removed the lowest level of detail. Helpful.
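If you want to check the footprint yourself, the standard MySQL status commands should work unchanged against an InfoBright server (the table name matches the load sketch above):

```sql
-- Reported data size for the table, straight from the server.
SHOW TABLE STATUS LIKE 'pos_sales';

-- Or the same figure in megabytes from information_schema.
SELECT table_name,
       ROUND(data_length / 1024 / 1024, 1) AS data_mb
FROM   information_schema.tables
WHERE  table_name = 'pos_sales';
```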
How much memory?
I don’t have much guidance because I don’t have enough data to stress the cache. My largest data set can fit comfortably inside the compressed cache. That means every company I’ve ever dealt with would be able to avoid disk reads and improve performance. Unfortunately, this does not put InfoBright’s performance on par with other in-memory databases. More on this later.
Here are some guidelines from InfoBright on the memory (in megabytes) you should allocate given a certain amount of system memory. These figures have no relationship to the size of your data set. I also don’t know whether 32 GB represents an upper limit for the InfoBright software. I suspect the point of this table is that the loader heap does not need to increase, and that the compressed heap should grow the fastest but never exceed the main heap.
| System Memory | Server Main Heap Size | Server Compressed Heap Size | Loader Main Heap Size |

ServerMainHeapSize – size of the main memory heap in the server process, in MB.
ServerCompressedHeapSize – size of the compressed memory heap in the server, in MB.
LoaderMainHeapSize – size of the memory heap in the loader process, in MB.
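If I have the file name right, these settings are plain key=value entries in InfoBright’s brighthouse.ini; the numbers below are placeholders to show the format, not InfoBright’s recommendations.

```
# brighthouse.ini (sketch) -- placeholder values, not InfoBright's guidance
ServerMainHeapSize=600
ServerCompressedHeapSize=400
LoaderMainHeapSize=320
```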
Is it fast? Slow? My hardware is too restrictive to see what InfoBright can do. All signs are promising. What I can say is that the cache grew over time until MySQL was barely touching the disk. My processor is completely peaked, with 99.8% allocated to the MySQL process. According to this article published by MySQL yesterday, InfoBright queries are (for now) restricted to one CPU core. Performance is dependent on the size of my cache and the speed of each core, two things I have direct control over.
Even with my little desktop testbed, this much is clear: the QlikView in-memory database is much faster. On this dataset I’d see results in a split-second instead of 30, 60 or 120 seconds. You might think that comparing these two products isn’t fair, but if your goal is to deliver analysis in SMEs or enterprise departments, these two will definitely compete and complement one another.
One of the advantages of column-stores for data warehousing is that simply replicating the original transactional schema can yield adequate performance. There is also no performance hit for bringing in the lowest level of granularity. With column-stores, you may not need to build snowflake schemas or do much transformation, so they take less effort to get started with in smaller companies with resource-starved IT departments. That means failing faster, which is what interests me most: implement quickly, measure the early impact, and then choose investment (InfoBright Enterprise), deferral or elimination.
There is one other free column-store database of significance, MonetDB. It’s an academic project and as such it lacks the toolset and polish that InfoBright inherited from MySQL. I was up and running faster with InfoBright than I was with MonetDB because the installers and administration utilities for InfoBright are already familiar. My Windows tools for MySQL connected right in without a problem. My front-ends with simplified MySQL connectors were oblivious to the InfoBright backend, which is absolutely how it should be.
InfoBright is not without its issues. Documentation is thin or non-existent. I spent hours and hours before I determined (and confirmed on the forums) that the InfoBright loader does not support all of MySQL’s bulk-load syntax. This would not have been such a problem if the error message had given some hint that my syntax, perfectly legal in standard MySQL, wasn’t supported.
All in all, I’m thrilled to have a no-cost column-store database available for prototyping, quick and dirty applications, and bulk data storage.
Over the weekend I have revisited Tableau, enjoyed some success with MonetDB, tried to turn MySQL into a hundred million row data warehouse, been underwhelmed with Firebird, installed Greenplum and spent many frustrated hours with Talend Open Studio, Pentaho Kettle and Jitterbit.
Of course, I could just buy QlikView, but what can be done for less money? Unfortunately, data warehouses and BI front-ends are not sexy problems in the open-source community. Graphs and charts get a little more attention, but you’ll need to write your own code to glue them to your application.
In summary, what can I say about our options?
First, write your own ETL. Why do open-source ETL tools like Talend and Kettle work so hard to rebuild Informatica? It reminds me of Linux in the 1990s, when the community wanted to beat Windows, kept working to look like Windows, and wondered when victory would arrive. Informatica, like OLAP and mainframes, is from an era when memory was scarce and languages were low-level, slow to compile and run, abstracted little, and were not at all portable. On top of that, ODBC drivers were tightly controlled and costly.
But now we can pick from many great scripting languages. Today’s languages abstract the hard parts, are easy to read, can be edited while executing and talk to any system, database, web service or application. I think the next direction for ETL will be a simple (but extensible) transformation language using an ORM wrapper… Rails on ETL. Until that arrives, you can achieve everything you need with PHP, Perl, Ruby and others.
Best option for low-cost data warehouse?
Gartner released the updated quadrant for DW DBMS software and appliances. DATAllegro seems placed too far below Netezza on ability to execute: DATAllegro has large, proven installations, and their recent releases run on Dell blades with EMC storage instead of the customized FPGAs of Netezza. And how is Greenplum rated higher than DATAllegro? (via DBMS2)
I finally got around to watching the Tableau 3.0 webinar. I agree with their very excited presenter that Tableau 3.0 is a leap forward. The support of ad-hoc grouping of dimension elements is excellent as is the enhanced support of ad-hoc sets. The annotations look good and act sensibly. Generally, the new features are focused on ease of use, better statistical analysis, and report clarity. All good things. Here are 3.0 examples.
Annotations should be required in every BI tool. The ability to mark reference lines and data points on graphs and tables is critical to clear communication. Placing an annotation on a point in space does not require a data point to exist there, another nice feature. The smart BI vendors are focusing on collaboration and communication among users.
“Groups” stole their name from the “groups” of 2.x, which are now the “sets” of 3.0. They work like this: similar dimension values such as coffee and tea, which may need to be represented in the database as separate product lines, can now be combined on the fly within Tableau by an end user under the simple heading “drinks”. That makes it easy to answer a question about food vs. drink sales without exporting to Excel and spending more time adding up the drink categories. In short, “groups” bring dimension values together and “sets” separate special values from the rest of a dimension’s values, and both can be done by the end user. Pretty nice.
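In SQL terms, a group amounts to an ad-hoc CASE expression; the point of the feature is that the end user never has to write one (the names below are invented):

```sql
-- Combine coffee and tea into "Drinks" on the fly and compare against food.
SELECT CASE WHEN product_line IN ('Coffee', 'Tea') THEN 'Drinks'
            ELSE 'Food'
       END AS category,
       SUM(sale_amount) AS sales
FROM   sales
GROUP  BY 1;
```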
I think the strongest competitor for visualization is Spotfire. However, Tableau’s use of live database interaction will become an advantage as data warehouse implementations shift to high-performance in-memory read-optimized databases. Was that over-hyphenated? Spotfire’s initial data loads are inflexible and I wouldn’t recommend it if you need to update a large dataset frequently.
Unlike QlikView, all of Tableau’s data needs to be in a single database. With good design, this is not a performance issue. The problem is that the extra expense of hardware and software to store a separate data warehouse and run ETL processing may push Tableau’s final price tag far above QlikView, which can easily pull from multiple sources and uses its own high-speed database.
The ideas in this paper will be incorporated into the Vertica database product. And unfortunately it won’t be open source. At least that’s what one company employee commented on Slashdot.
In the same way that RAID designs (e.g. 1, 5 and 10) accommodate drive failures, the Vertica system will distribute the same slice of the database to several servers. A grid of commodity hardware can act as a high-availability system, and Vertica’s shared-nothing architecture enables this feature without complex design or execution.
From the paper: “We call a system that tolerates K failures K-safe. C-Store will be configurable to support a range of values of K.”
Inserts and updates are performed on a separate data store and merged in batches. Deletes are marked with bitmasks. Rather than building a complex locking scheme for grid members, data in the read-only and write stores is stamped with a time “epoch”. Queries specify an epoch. It’s an elegant implementation that is very well suited to a data warehouse.
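A conceptual sketch of the epoch idea, in ordinary SQL rather than anything Vertica actually exposes: every row carries the epoch in which it was inserted and, if deleted, the epoch in which it was deleted, so a query pinned to one epoch needs no locks.

```sql
-- Read "as of" epoch 42: everything inserted by then and not yet deleted.
-- Writers keep appending to the write store under newer epochs; readers never block.
SELECT store_id, SUM(sale_amount) AS sales
FROM   sales
WHERE  insert_epoch <= 42
  AND  (delete_epoch IS NULL OR delete_epoch > 42)
GROUP  BY store_id;
```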
Started by a major contributor to the Ingres and Postgres projects, Vertica is implementing a read-optimized database that is an excellent fit for the data warehouse world. Given the founder’s support of open source, I expect this company will follow the hybrid commercial/FOSS model of MySQL and others. Some core design features: highly compact storage, total ad-hoc read optimization, and a shared-nothing grid design that is dead easy to implement with commodity (not high-availability) hardware. Via Slashdot.