At Deep, we understand that almost every visionary use case and technology trend depends on Big Data. To build technology that meets these demands, the industry needs an open conversation about its breakthroughs, limitations, and challenges, and we believe that conversation cannot move forward without fair and transparent tests of the solutions in the space. That is why we have committed to publishing fair and transparent benchmarks.

This means that when we publish performance benchmarks against a competing product, we will always:

  • Tune the competing product(s) thoroughly – tuning our own product while leaving another with its “out of the box” configuration is not fair.
  • Disclose every aspect of the test(s) – we will show you every configuration setting we use so that you can easily replicate our results on your own.
  • Always include a standard test – we want you to have an apples-to-apples comparison on a test you already know. If no standard test exists for a particular scenario, we will say so explicitly.
  • Always listen – we don’t have all the answers, and we believe that you, the community, are the best source of feedback on how we should advance our industry’s discussion. If you think Deep could be doing something better, more transparently, or more fairly, please let us know.