David Loshin explores considerations for organizations gradually making the transition to Hadoop.
On almost all of my master data management (MDM) consulting engagements, someone on the client team inevitably asks how MDM is different from data warehousing. This question is both understandable and important. It’s usually not that people are confused about MDM’s focus on master data, as opposed to reference data [...]
It's that time of year again, when almost 50 million Americans travel home for Thanksgiving. We'll share a smorgasbord of turkey, stuffing and vegetables and discuss fun political topics, all to celebrate the ironic friendship between colonists and Native Americans. Being part Italian, my family augments the 20-pound turkey with pasta – […]
The post 3 Thanksgiving lessons about data warehouses, Hadoop and self-service data prep appeared first on The Data Roundtable.
In my last post, we explored the operational facet of data governance and data stewardship. We focused on the challenges of providing a scalable way to assess incoming data sources, identify data quality rules and define enforceable data quality policies. As the number of acquired data sources increases, it becomes […]
In Part 1 of this two-part series, I defined data preparation and data wrangling, then raised some questions about requirements gathering in a governed environment (i.e., ODS and/or data warehouse). Now – all of us very-managed people are looking at the horizon, and we see the data lake. How do […]
The post Data preparation and data wrangling, Part 2 (yippee, bring your lasso) appeared first on The Data Roundtable.
Data governance can encompass a wide spectrum of practices, many of which are focused on the development, documentation, approval and deployment of policies associated with data management and utilization. I distinguish the facet of “operational” data governance from the fully encompassed practice to specifically focus on the operational tasks for […]
The post Data validation as an operational data governance best practice, Part 1 appeared first on The Data Roundtable.
I'm a very fortunate woman. I have the privilege of working with some of the brightest people in the industry. But when it comes to data, everyone takes sides. Do you “govern” the use of all data, or do you let the analysts do what they want with the data to […]
The post Data preparation and data wrangling, Part 1 (yippee, bring your lasso) appeared first on The Data Roundtable.
It's the age of big data and the internet of things (IoT), but how will that change things for insurance companies? Do insurers still need to consider classic data warehouse concepts based on a relational data model? Or will all relevant data be stored in big data structures and thus […]
Why they will still play a valuable role in organizational data management and integration efforts.
The post ETL and data warehouses are dead! Long live ETL and data warehouses! appeared first on The Data Roundtable.
Auditability and data quality are two of the most important demands on a data warehouse. Why? Because reliable data processes ensure the accuracy of your analytical applications and statistical reports. Using a standard data model enhances the auditability and data quality of your data warehouse implementation for business analytics. Auditability by […]
The post Don't let your data warehouse be a data labyrinth! appeared first on The Analytic Insurer.