
Mixing Analytic Workloads with Greenplum and Apache Spark

Apache Spark is a popular in-memory data analytics engine because of its speed, scalability, and ease of use. It also fits well with DevOps practices and cloud-native software platforms. It’s good for data exploration, interactive analytics, and streaming use cases.

However, Spark, like any data-processing platform, is not one-size-fits-all. Feature sets differ between Spark versions, and Spark’s machine-learning libraries can vary in important ways from release to release, or may simply lack the right algorithm for a given problem.

In this webinar, you’ll learn:

- How to integrate data warehouse workloads with Spark
- Which workloads are better for Greenplum and for Spark
- How to use the Greenplum-Spark connector

Presenter: Kong Yew Chan, Product Manager, Pivotal


Mixing Analytic Workloads with Greenplum and Apache Spark

  1. © Copyright 2017 Pivotal Software, Inc. All Rights Reserved. Mixing Analytic Workloads with Greenplum and Apache Spark. Kong Yew Chan, Product Manager, kochan@pivotal.io
  2. Agenda ■ Apache Spark for analytic workloads ■ Mixing workloads with Greenplum and Spark ■ Using the Greenplum-Spark connector
  3. Analytical workloads are changing as businesses demand streaming and real-time processing.
  4. The Data Lake Is Valuable, but Not a Panacea. Many operations require the features of mature, relational MPP data platforms: • ACID-compliant transactions • Full ANSI SQL compliance • Immediate consistency vs. eventual consistency • Hundreds or thousands of concurrent queries • Queries involving complex, multi-way joins requiring a sophisticated optimizer
  5. Does Spark Replace the Data Warehouse? No: Spark is an in-memory processing system that complements the data warehouse. Reasons: • In-memory processing • Memory limitations • Data movement
  6. What if we could leverage the best qualities of the data warehouse and the best qualities of Spark?
  7. Why use Apache Spark for processing data? Features: • Up to 100x performance gain with in-memory analytical processing • SQL for structured data processing • Advanced analytics for machine learning, graph, and streaming. Use cases: • Data exploration • Interactive analytics • Stream processing
  8. Why use Greenplum for processing data? Features: ● Processes analytics over the entire dataset (in memory and on disk) ● Provides full ANSI SQL for structured data processing ● Advanced analytics for machine learning (MADlib), graph, geospatial, and text. Use cases: ● Large-scale data processing ● Advanced analytics for enterprise use cases
  9. Mixing Analytic Workloads. Best for Greenplum: ● Analytics over the entire dataset ● Processing multi-structured data. Best for Spark: ● Data limited to what fits in Spark’s in-memory platform ● ETL processing (streaming, micro-batches) ● Data exploration
  10. Using the Greenplum-Spark connector
  11. Use Case: Financial Services. An MPP database feeds financial risk algorithms on Spark executors via parallel data transfer through the GPDB-Spark connector. Use cases: ● Analyzing financial risk. Benefits: ● Faster in-memory processing ● Expands data processing to Spark
  12. Greenplum-Spark connector (GSC): high-speed parallel data transfer between GPDB (MPP database) and Spark (in-memory processing) ● Easy to use ● Optimized for performance ● Complements the Spark ecosystem
  13. Greenplum-Spark architecture ● Uses GPDB segments to transfer data to Spark executors ● Scales dynamically (Kubernetes, YARN, Mesos) ● Supports Spark programming languages (Python, Scala, Java, R)
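      One way to observe that parallel transfer from the Spark side is to count the partitions of the resulting DataFrame. A minimal sketch, reusing the option map from the next slide; getNumPartitions is core Spark API rather than connector-specific, and exactly how rows map to partitions depends on the connector's partitioning scheme:

          // Sketch: the DataFrame arrives split into multiple partitions,
          // each transferred in parallel from the Greenplum segments.
          val gpdf = spark.read.format("greenplum")
            .options(gscOptionMap)   // same option map as on the next slide
            .load()
          println(gpdf.rdd.getNumPartitions)   // number of parallel transfer partitions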
  14. Easy to use:

      scala> :paste
      // Entering paste mode (ctrl-D to finish)
      val gscOptionMap = Map(
        "url"             -> "jdbc:postgresql://gpmaster.domain/tutorial",
        "user"            -> "user1",
        "password"        -> "pivotal",
        "dbschema"        -> "faa",
        "dbtable"         -> "otp_c",
        "partitionColumn" -> "airlineid"
      )
      val gpdf = spark.read.format("greenplum")
        .options(gscOptionMap)
        .load()
      // Exiting paste mode, now interpreting.
      gpdf: org.apache.spark.sql.DataFrame = [flt_year: smallint, flt_quarter: smallint ... 44 more fields]
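      The example assumes the connector jar is already on the spark-shell classpath. One way to get there with the standard --jars flag; the exact jar name varies by connector and Scala version, so treat it as a placeholder:

          spark-shell --jars /path/to/greenplum-spark_2.11-<version>.jar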
  15. Performance optimization (column projection):

      scala> :paste
      // Entering paste mode (ctrl-D to finish)
      gpdf.select("origincityname", "flt_month", "airlineid", "carrier").show()
      // ctrl-D
      // Exiting paste mode, now interpreting.
      +---------------+---------+---------+-------+
      | origincityname|flt_month|airlineid|carrier|
      +---------------+---------+---------+-------+
      |    Detroit, MI|       12|    19386|     NW|
      |    Houston, TX|       12|    19704|     CO|
      |    Houston, TX|       12|    19704|     CO|
      ...
      only showing top 20 rows
  16. Performance optimization (predicate pushdown):

      scala> :paste
      // Entering paste mode (ctrl-D to finish)
      gpdf.select("origincityname", "flt_month", "airlineid", "carrier")
          .filter("cancelled = 1").filter("flt_month = 12")
          .orderBy("airlineid", "origincityname")
          .show()
      // ctrl-D
      // Exiting paste mode, now interpreting.
      +---------------+---------+---------+-------+
      | origincityname|flt_month|airlineid|carrier|
      +---------------+---------+---------+-------+
      |    Detroit, MI|       12|    19386|     NW|
      |    Houston, TX|       12|    19704|     CO|
      ...
      only showing top 20 rows
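      A way to check both optimizations from the Spark side is to print the physical plan. explain() is standard Spark API; exactly how the connector surfaces pruned columns and pushed filters in the plan output is implementation-dependent:

          // Sketch: the data-source scan node of the printed plan shows
          // which columns are read and which filters were pushed down,
          // when the source supports pushdown.
          gpdf.select("origincityname", "flt_month", "airlineid", "carrier")
            .filter("cancelled = 1")
            .explain()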
  17. Benefits of the Greenplum-Spark connector ● Faster data transfer between GPDB and Spark (75x faster than the JDBC connector) ● Easy to use ● Performance optimizations (column projection, predicate pushdown)
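      For context on the 75x figure: the baseline is Spark's generic JDBC source, which without explicit partitioning options reads over a single connection through the Greenplum master, while the connector transfers from the segments in parallel. A plain-JDBC read of the same table would look roughly like this (standard Spark API, reusing the connection details from slide 14):

          // Baseline for comparison: Spark's built-in JDBC data source.
          // With no partitioning options it uses one connection, funnelling
          // all rows through the Greenplum master.
          val jdbcDF = spark.read.format("jdbc")
            .option("url", "jdbc:postgresql://gpmaster.domain/tutorial")
            .option("dbtable", "faa.otp_c")
            .option("user", "user1")
            .option("password", "pivotal")
            .load()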
  18. Key Takeaways ● Mix workloads across Greenplum and Spark, matching each workload to the platform it suits ● Leverage both the Greenplum and Spark ecosystems
  19. Start Your Journey Today! Pivotal Greenplum and Spark Connector: pivotal.io/pivotal-greenplum, greenplum-spark.docs.pivotal.io. Pivotal Data Science: pivotal.io/data-science. Apache MADlib: madlib.apache.org. Greenplum Database Channel.
  20. Questions? Contact kochan@pivotal.io. Thank you for attending! © Copyright 2017 Pivotal Software, Inc. All Rights Reserved.
