Denodo's real claim to fame is its speed. Leveraging in-memory technology and query optimization, it delivers real-time performance that sets it apart from competing solutions. Its low-code development environment suits data scientists and citizen developers and requires no programming knowledge. The company's core offering is data virtualization, a modern approach to data integration that connects disparate data sources and applies the necessary transformations.
Denodo's primary focus is on data virtualization, which it defines as "synonymous with information agility — it delivers a simplified, unified and integrated view of trusted business data in real time or near real time as needed by the consuming applications, processes, analytics or business users." Gartner listed the company as a Challenger in the most recent Magic Quadrant for Data Integration. The analyst firm also noted that Denodo's mindshare is growing and that it was mentioned in nearly 95 percent of Gartner's client inquiries regarding data integration.
Denodo claims that it can "access, integrate, and deliver data 10x faster and 10x cheaper than any other middleware solution." Its data virtualization platform offers ETL, data replication, data federation, Enterprise Service Bus (ESB) and other integration capabilities in real time. The company also boasts that in some cases it can reduce the need for replicated data marts and data warehouses. It incorporates parallel in-memory fabric and dynamic query optimization for extremely fast performance. However, it lacks some of the capabilities of other data integration tools, and the company admits that it is sometimes necessary to run ETL tools alongside Denodo.
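Denodo's federation engine is proprietary, but the core idea of data federation — answering a single query by combining rows pulled live from several sources at request time, with no replicated copy — can be illustrated with a toy sketch. This is plain Python, not Denodo code; the source names and fields are invented:

```python
# Toy illustration of data federation: one "virtual view" is answered by
# joining records from two separate sources on the fly, with no
# replicated data mart in between. All names here are invented.

# Source 1: a hypothetical CRM system
crm = [
    {"customer_id": 1, "name": "Acme Corp"},
    {"customer_id": 2, "name": "Globex"},
]

# Source 2: a hypothetical billing database
billing = [
    {"customer_id": 1, "total_due": 1200.0},
    {"customer_id": 2, "total_due": 450.0},
]

def virtual_customer_view():
    """Join the two sources at query time, like a federated virtual view."""
    due_by_id = {row["customer_id"]: row["total_due"] for row in billing}
    return [
        {"name": row["name"], "total_due": due_by_id[row["customer_id"]]}
        for row in crm
    ]

print(virtual_customer_view())
```

A real platform adds query pushdown, caching and optimization on top of this basic pattern, which is where the in-memory fabric mentioned above comes in.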
Deployment:
On-premises, private cloud or public cloud (AWS or Azure)
System Requirements:
Denodo runs on any operating system compatible with Java 1.8 or greater, including Windows, Linux, Unix and Solaris. It requires at least a quad-core processor (although some complex projects may require eight cores or more), 16 GB RAM and 5 GB of storage space (100 GB recommended).
Connectors:
Accepts data from most common enterprise sources, including IBM DB2, Microsoft SQL Server, MySQL, Oracle, PostgreSQL, SAP HANA, GreenPlum, Teradata, AWS, Apache Hive, Impala, Spark, Presto, Microsoft Azure, Google, Facebook, Salesforce and many others.
Design and Development Environment:
Low-code, drag-and-drop interface designed for "data-oriented developers such as data engineers, power users, and citizen integrators." Integrates with Subversion, Microsoft Team Foundation server and Git.
- Data modeling
- Data catalog
- Dynamic query optimization
- In-memory technology
- Data masking
- Data lineage
- Data quality
- Hybrid and multi-cloud support
- Web-based interface
- Advanced security
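Of the features above, data masking is the easiest to picture: sensitive fields are obscured before a virtual view reaches its consumer. A minimal sketch of the idea in plain Python (not Denodo's implementation; the field names are invented):

```python
def mask_value(value: str, visible: int = 4) -> str:
    """Replace all but the last `visible` characters with asterisks."""
    if len(value) <= visible:
        return "*" * len(value)
    return "*" * (len(value) - visible) + value[-visible:]

def mask_row(row: dict, sensitive_fields: set) -> dict:
    """Return a copy of the row with sensitive fields masked."""
    return {
        key: mask_value(val) if key in sensitive_fields else val
        for key, val in row.items()
    }

record = {"name": "Jane Doe", "card_number": "4111111111111111"}
print(mask_row(record, {"card_number"}))
# card_number becomes "************1111"
```

In a virtualization platform, a rule like this would be attached to the view definition so every consumer sees masked data without the underlying source changing.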
Support and Services:
Professional advisory, assessment, quick-start and engineering services are available, along with training and support.
|Feature|Details|
|---|---|
|Deployment|On-premises, private cloud or public cloud (AWS or Azure)|
|Operating System|Windows, Linux, Unix, Solaris|
|Processor|Quad-core (eight cores for some complex projects)|
|Memory|16 GB RAM|
|Storage|5 GB (100 GB recommended)|
|Software|Java 1.8 or greater|
|Connectors|IBM DB2, Microsoft SQL Server, MySQL, Oracle, PostgreSQL, SAP HANA, GreenPlum, Teradata, AWS, Apache Hive, Impala, Spark, Presto, Microsoft Azure, Google, Facebook, Salesforce and many others|
|Design and Development Environment|Low-code, drag-and-drop interface|
|Other Features|Data virtualization, in-memory technology, dynamic query optimization, data catalog, data masking, data lineage|
|Support and Services|Training, support, professional services|
|Gartner Magic Quadrant Rating|Challenger|