What is a Data Fabric?

Updated: Dec 23, 2020

Data Fabric is one of the most talked-about technologies to emerge in a very long time. It’s been the subject of talks and blog posts, won awards at some of the biggest conferences in tech, and even earned its own Gartner Hype Cycle report. But all this attention has actually helped muddy the waters around this nascent technology with conflicting or incomplete definitions. So what exactly is Data Fabric?

Data Fabric is a new approach to data management. It’s called a “fabric” because of its interwoven structure, but it could also be called a true data network, and it is closely related to a graph database. Its approach to handling data is modeled on the way information is stored in the human brain, using principles like plasticity and continuous reorganization to optimize the connections between points of data and create a more efficient data structure.

These features make Data Fabric the newest evolution of data, an actual alternative to data silos, and the beginning of the end for point-to-point integration. Let’s unpack what this all means.

The highest evolution of data.

To understand Data Fabric, it’s important to look at the history of data management to this point—particularly, the relationship between retaining control over data and having the ability to share data easily. Data Fabric is the first major change in data management since Oracle brought the first commercial relational database to market in 1979. While Data Lakes, Data Warehouses, and other solutions may present themselves as “a new way to handle data,” the truth is that they all fundamentally manage data the same way the relational database did 40 years ago.

Over the long history of recorded data, from the first cave paintings and petroglyphs all the way to the latest cloud computing solutions, data has become easier to share but harder to control. This is primarily due to data copying, which has run rampant in the age of digitized data.

Data copying was the only real solution for sharing data for a very long time—consider that widespread data copying can be traced to the invention of the printing press in the mid-15th century. But it was always problematic: as soon as a copy of data is made, the original owner sacrifices control. You can’t dictate exactly who can see the new copy (though you may give it to one person in particular, they can then pass it along to someone else), and you can’t control what they do with it once they have it. Anyone who has run into versioning issues on a project knows the danger of having multiple copies of data floating about, as changes made to one copy aren’t reflected automatically in the others. And that’s not to mention fakes, forgeries, and other vulnerabilities. Data’s transition from the paper age into the digital age did little to fix any of these problems; in fact, they only became worse as digital data removed any physical restrictions on making copies.

Want to learn more about the Data Fabric? Check out Episode 1 of our Learning Series, where Cinchy Co-Founder & CEO Dan DeMers will talk about the role of Data Fabric in your target state architecture.

Data Fabric promises an end to data copying, using a permissions-based system to control access to a single copy of data instead. Anyone who has worked in a cloud-based productivity suite should be familiar with the way this works—when you share something, you aren’t sending a copy. You are saying exactly who can view, edit, comment on, or otherwise use the data you are providing. Because it’s not a copy, you aren’t sacrificing control and, because of Data Fabric’s unique properties, these permission settings are preserved wherever that data appears.

This preservation of permissions is the crucial reason Data Fabric is able to replace copies with permissions-based access. Until now, there simply wasn’t a data infrastructure available that could retain such permissions regardless of how the data was being accessed. By making this change and eliminating the need to copy data, Data Fabric offers an immediate solution to one of data’s biggest problems.

This elimination of data copying, combined with permission-based access controls, is a key attribute of Data Fabric.
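To make the idea concrete, here is a minimal sketch of permission-based access to a single copy of data. This is an illustration of the concept only, not Cinchy’s implementation; the `Record` class, the permission levels, and the user names are all hypothetical.

```python
from dataclasses import dataclass, field

# Illustrative permission levels, ordered from weakest to strongest.
LEVELS = ("view", "comment", "edit", "own")

@dataclass
class Record:
    """One authoritative copy of a piece of data, plus who may access it."""
    value: str
    acl: dict = field(default_factory=dict)  # user -> permission level

    def grant(self, user: str, level: str) -> None:
        assert level in LEVELS, f"unknown permission level: {level}"
        self.acl[user] = level

    def revoke(self, user: str) -> None:
        self.acl.pop(user, None)

    def can(self, user: str, level: str) -> bool:
        # A stronger grant implies every weaker one (edit implies view).
        held = self.acl.get(user)
        return held is not None and LEVELS.index(held) >= LEVELS.index(level)

# The owner shares access to the one copy instead of sending duplicates.
report = Record("Q4 revenue figures")
report.grant("alice", "edit")
report.grant("bob", "view")

print(report.can("bob", "view"))   # True: bob may read the data
print(report.can("bob", "edit"))   # False: but not modify it
report.revoke("bob")
print(report.can("bob", "view"))   # False: access can be withdrawn at any time
```

Notice that nothing is ever duplicated: sharing, restricting, and revoking all operate on the one record, which is the behavior the fabric preserves wherever the data appears.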

An end to data silos.

The interwoven approach to data that gives Data Fabric its name also makes it the first technology that can truly end data silos. While we’ve all been told that data silos get in the way of productivity and should be removed, there hasn’t been a genuine solution until now, because there was never a way around app-dependent databases before Data Fabric.

Data has always been tied to the application that creates it, which is why data silos and point-to-point integration are such persistent problems in today’s data architecture. Solutions that previously attempted to break down silos often just built bigger ones, because they did nothing to address the fundamental problem of app-centric databases.

Think of each application as “speaking its own language” when it comes to data. App-specific databases are then built in this same “language,” which is why the data ends up in silos. Point-to-point integration efforts are an attempt to remove these silos, but these integrations only work for a specific solution and new integrations are needed every time a new project is started.

Data Fabric offers the only true alternative to the app-centric database, as it can handle data from many different applications without the need for ongoing point-to-point integrations. Think of it as a data repository that “speaks every language” when it comes to data. It can even support autonomous data, which is independent from applications entirely.
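One way to picture “speaking every language” is a shared canonical model: each source is mapped onto the common model once, rather than being translated pairwise into every other app’s schema. The sketch below is a hypothetical illustration of that idea; the field names, source names, and `to_canonical` helper are invented for the example.

```python
# Two apps describing the same person in their own "languages" (schemas).
CRM_RECORD = {"full_name": "Ada Lovelace", "mail": "ada@example.com"}
BILLING_RECORD = {"customer": "Ada Lovelace", "email_addr": "ada@example.com"}

# One mapping per source into the shared model (n mappings in total),
# instead of one translator per pair of apps.
MAPPINGS = {
    "crm": {"full_name": "name", "mail": "email"},
    "billing": {"customer": "name", "email_addr": "email"},
}

def to_canonical(source: str, record: dict) -> dict:
    """Rename a source record's fields into the shared model."""
    return {MAPPINGS[source][key]: value for key, value in record.items()}

print(to_canonical("crm", CRM_RECORD))
print(to_canonical("billing", BILLING_RECORD))
# Both produce the same shared shape: {"name": ..., "email": ...}
```

Once every source lands in the same shape, any application connected to the network can use any record, which is what frees the data from the app that created it.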

This universal compatibility is another unique characteristic of Data Fabric.

A gradual replacement for point-to-point integration.

Point-to-point integration inhibits productivity. But so do massive paradigm shifts in your technological foundations. Fortunately, Data Fabric is a solution to both these issues. Implementing Data Fabric shouldn’t require a massive technical project, even for the most robust enterprise. Instead, a Data Fabric is designed to start with a single project or solution and to grow organically from there as more and more data sources are added to it. This means there is no downtime for your organization while you “move everything onto the Data Fabric.” Simply start by connecting what’s needed to solve one particular problem, and go from there.


A Data Fabric should never require a massive, one-time implementation project. Standing up your Data Fabric should start small, and it should be able to grow over time.

This leads to another of Data Fabric’s unique capabilities: the possibility of enterprise-wide network effects for data. Network effects are a phenomenon where a network becomes more powerful and more efficient as more sources are connected to it. Data Fabric enables network effects because all data on the fabric is part of a network that makes it useful and usable to any source connected to that network. In other words, the data source you connect today will work efficiently with the data you put onto the fabric a year ago, or a year from now. Efficiency scales as the network grows, in contrast to the old model of point-to-point integration, where each new data source makes projects harder to deliver because it multiplies the number of integrations required.

Only Data Fabric offers such network effects for data.

Data virtualization made real.

If you’re familiar with the data virtualization approach to data management, some of these properties may sound familiar. But there’s a key difference between Data Fabric and data virtualization: Data Fabric is a real change to the physical structure of your data, while data virtualization only simulates this change.


Data virtualization is a powerful tool, but it isn’t real change. It simply hides the problems of data silos, rampant data copying, and cumbersome point-to-point integration behind a virtual layer of organization. It’s like putting on VR goggles and looking at a perfectly clean kitchen instead of actually doing your dishes.

Data Fabric takes the promises made by data virtualization and enacts real change. It isn’t a functional layer added on top of a problematic architecture; it is a whole new way to manage data.

Ready to take your understanding of Data Fabric to the next level? Schedule a demo!


© Cinchy 2020 All Rights Reserved