Concur, the leading provider of spend management solutions and services, will be joining us to discuss how they implemented Cloudera for data discovery and analytics. Using an enterprise data hub, Concur gave its data scientists a centralized environment that enabled faster, smarter analytic development.
During this session you will learn about:
The end user process of building smarter analytics and how Cloudera can help
Concur's pre-Hadoop and post-Hadoop environments
Summary of key lessons and end benefits of Concur’s modern architecture
Denny Lee: Sr. Director, Data Sciences Engineering
Denny is a hands-on data architect and developer/hacker with more than 15 years of experience developing internet-scale infrastructure, data platforms, and distributed systems, both on-premises and in the cloud. His key focus is solving complex, large-scale data problems: he provides not only architectural direction but also hands-on implementation of these systems to facilitate a successful data discovery and analytics environment.
Why Hadoop slide content:
Even with primarily relational systems, it involved hundreds of sources
Getting a BI tool to connect to so many sources is … not fun
More often than not, we needed to understand a subset or aggregate of this data - not all of the data!
Can use Pig to process, extract, and filter the data
Can use Hive - a SQL-like query language - to query my data
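As a rough sketch of the two approaches the slide names, here is what filtering and aggregating a subset of the data might look like in Pig and in Hive. The table, path, and field names below are hypothetical, chosen only for illustration:

```pig
-- Pig: load raw records, keep only the subset of interest, then aggregate
-- (path and schema are made up for this example)
raw = LOAD '/data/expenses' USING PigStorage('\t')
      AS (user_id:chararray, amount:double, category:chararray);
travel = FILTER raw BY category == 'travel';
grouped = GROUP travel BY user_id;
totals = FOREACH grouped GENERATE group AS user_id,
         SUM(travel.amount) AS total;
STORE totals INTO '/data/travel_totals';
```

The same aggregate expressed in Hive's SQL-like query language:

```sql
-- Hive: query the subset directly, assuming an `expenses` table exists
SELECT user_id, SUM(amount) AS total
FROM expenses
WHERE category = 'travel'
GROUP BY user_id;
```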