Netflix Big Data Platform: An Apache Spark Integration Experience
Name: 박철수
Affiliation: Netflix
Outline
1.  Netflix big data platform
2.  Spark @ Netflix
3.  Dynamic allocation
4.  Predicate pushdown
5.  S3 file listing
6.  S3 insert overwrite
7.  Zeppelin, IPython notebooks
8.  Use case (Pig vs. Spark)
Netflix Big Data Platform
Netflix data pipeline
[Diagram] Event data (500 bn events/day, ~15 min latency) flows from cloud apps through Suro/Kafka and Ursula into S3; dimension data flows daily from Cassandra SSTables through Aegisthus into S3.
Netflix big data platform
[Diagram] Platform layers: the S3 data warehouse; services such as Metacat; prod, adhoc, and test clusters; tools and gateways including the Big Data API/Portal; and prod clients.
Our use cases
•  Batch jobs (Pig, Hive)
•  ETL jobs
•  Reporting and other analysis
•  Interactive jobs (Presto)
•  Iterative ML jobs (Spark)
Spark @ Netflix
Mix of deployments
•  Spark on Mesos
•  Self-serving AMI
•  Full BDAS (Berkeley Data Analytics Stack)
•  Online streaming analytics
•  Spark on YARN
•  Spark as a service
•  YARN application on EMR Hadoop
•  Offline batch analytics
Spark on YARN
•  Multi-tenant cluster in AWS cloud
•  Hosting MR, Spark, Druid
•  EMR Hadoop 2.4 (AMI 3.9.0)
•  d2.4xlarge EC2 instance type
•  1000+ nodes (100TB+ total memory)
Deployment
Tarballs and configs in S3:
s3://bucket/spark/1.5/spark-1.5.tgz, spark-defaults.conf (spark.yarn.jar=1440443677)
s3://bucket/spark/1.4/spark-1.4.tgz, spark-defaults.conf (spark.yarn.jar=1440304023)
Staged assembly jars:
/spark/1.5/1440443677/spark-assembly.jar
/spark/1.5/1440720326/spark-assembly.jar
/spark/1.4/1440304023/spark-assembly.jar
/spark/1.4/1440989711/spark-assembly.jar
Application config:
name: spark
version: 1.5
tags: ['type:spark', 'ver:1.5']
jars:
- 's3://bucket/spark/1.5/spark-1.5.tgz'
The latest tarball is downloaded from S3 via Genie.
Goals
1. Automate deployment.
2. Support multiple versions.
3. Deploy new code in 15 minutes.
4. Roll back bad code in less than a minute.
Dynamic Allocation
Dynamic allocation
// spark-defaults.conf
spark.dynamicAllocation.enabled true
spark.dynamicAllocation.executorIdleTimeout 5
spark.dynamicAllocation.initialExecutors 3
spark.dynamicAllocation.maxExecutors 500
spark.dynamicAllocation.minExecutors 3
spark.dynamicAllocation.schedulerBacklogTimeout 5
spark.dynamicAllocation.sustainedSchedulerBacklogTimeout 5
spark.dynamicAllocation.cachedExecutorIdleTimeout 900
// yarn-site.xml
yarn.nodemanager.aux-services
•  spark_shuffle, mapreduce_shuffle
yarn.nodemanager.aux-services.spark_shuffle.class
•  org.apache.spark.network.yarn.YarnShuffleService
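For reference, a minimal Scala sketch applying the same settings programmatically through SparkConf instead of spark-defaults.conf (the app name is illustrative; spark.shuffle.service.enabled is the Spark-side switch for the external shuffle service registered in yarn-site.xml above):

// Sketch: dynamic allocation configured in application code.
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("dynamic-allocation-example")                     // hypothetical app name
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.shuffle.service.enabled", "true")                 // external shuffle service keeps shuffle files when executors are released
  .set("spark.dynamicAllocation.minExecutors", "3")
  .set("spark.dynamicAllocation.maxExecutors", "500")
  .set("spark.dynamicAllocation.executorIdleTimeout", "5")      // seconds
  .set("spark.dynamicAllocation.cachedExecutorIdleTimeout", "900")
val sc = new SparkContext(conf)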
SPARK-6954
SPARK-7955
// Cache a three-month slice of a large table; with SPARK-7955, executors holding
// cached blocks use cachedExecutorIdleTimeout instead of the shorter executorIdleTimeout.
val data = sqlContext
  .table("dse.admin_genie_job_d")
  .filter($"dateint">=20150601 and $"dateint"<=20150830)
data.persist
data.count
SPARK-8167
•  Symptom
•  Spark tasks randomly fail with “executor lost” error.
•  Cause
•  YARN preemption is not graceful.
•  Solution
•  Preempted tasks shouldn’t be counted as failures but should be retried.
YARN-2730
•  Symptom
•  MR jobs time out during localization when running alongside Spark jobs on the same cluster.
•  Cause
•  The NM localizes one job at a time. Since the Spark runtime jar is big, localizing Spark jobs may take long, blocking MR jobs.
•  Solution
•  Stage the Spark runtime jar on HDFS with high replication (see the sketch below).
•  Make the NM localize multiple jobs concurrently.
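A plausible spark-defaults.conf entry for the jar-staging part, assuming the assembly is pre-staged at one of the HDFS paths shown on the Deployment slide (the exact path is illustrative):

# Point Spark at an assembly staged on HDFS with high replication,
# so the NodeManager does not have to localize a large jar per job.
spark.yarn.jar hdfs:///spark/1.5/1440443677/spark-assembly.jar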
Predicate Pushdown
Predicate pushdown
Case → Behavior:
•  Predicates with partition cols on partitioned table → Single partition scan
•  Predicates with partition and non-partition cols on partitioned table → Single partition scan
•  No predicate on partitioned table (e.g. sqlContext.table("nccp_log").take(10)) → Full scan
•  No predicate on non-partitioned table → Single partition scan
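To make the first and third rows concrete, a short sketch against the nccp_log table used in the benchmark query later in the deck (column semantics assumed):

import sqlContext.implicits._  // for the $"col" syntax

// Predicate on the partition columns: Spark prunes to a single partition scan.
val onePartition = sqlContext
  .table("nccp_log")
  .filter($"dateint" === 20150801 and $"hour" === 0)
  .take(10)

// No predicate on a partitioned table: full scan of all partitions.
val fullScan = sqlContext.table("nccp_log").take(10)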
Predicate pushdown for metadata
[Diagram] Query flow: Parser → Analyzer → Optimizer → SparkPlanner. During analysis, ResolveRelation resolves the table through HiveMetastoreCatalog, which calls getAllPartitions() and fetches metadata for every partition.
What if your table has 1.6M partitions?
SPARK-6910
•  Symptom
•  Querying against a heavily partitioned Hive table is slow.
•  Cause
•  Predicates are not pushed down into the Hive metastore, so Spark does a full scan for table metadata.
•  Solution
•  Push down binary comparison expressions into the Hive metastore via getPartitionsByFilter().
Predicate pushdown for metadata
[Diagram] Query flow: Parser → Analyzer → Optimizer → SparkPlanner. The HiveTableScans strategy plans a HiveTableScan that calls getPartitionsByFilter(), fetching metadata only for matching partitions.
S3 File Listing
Input split computation
•  mapreduce.input.fileinputformat.list-status.num-threads
•  The number of threads to use to list and fetch block locations for the specified input paths.
•  Setting this property in Spark jobs doesn’t help.
File listing for partitioned table
[Diagram] Each partition path becomes a HadoopRDD in a Seq[RDD], and each HadoopRDD lists its input dirs via the S3N file system. Input dirs are listed sequentially, one partition at a time.
SPARK-9926, SPARK-10340
•  Symptom
•  Input split computation for partitioned Hive table on S3 is slow.
•  Cause
•  Listing files on a per partition basis is slow.
•  S3N file system computes data locality hints.
•  Solution
•  Bulk list partitions in parallel using AmazonS3Client.
•  Bypass data locality computation for S3 objects.
S3 bulk listing
[Diagram] Each partition path becomes a HadoopRDD in a ParArray[RDD]; input dirs are bulk listed in parallel via AmazonS3Client.
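A minimal sketch of the bulk-listing idea using the AWS SDK for Java (AmazonS3Client) and Scala parallel collections; the bucket, prefixes, and how the results feed into split computation are illustrative, not the actual SPARK-9926/SPARK-10340 patch:

import com.amazonaws.services.s3.AmazonS3Client
import scala.collection.JavaConverters._

val s3 = new AmazonS3Client() // credentials resolved from the default provider chain

// Hypothetical partition prefixes of a partitioned table.
val partitionPrefixes = Seq(
  "warehouse/nccp_log/dateint=20150801/hour=0/",
  "warehouse/nccp_log/dateint=20150801/hour=1/")

// List all partition dirs in parallel with one S3 client,
// instead of one sequential S3N listing per partition.
// (Pagination of truncated listings is omitted for brevity.)
val keys = partitionPrefixes.par.flatMap { prefix =>
  s3.listObjects("bucket", prefix).getObjectSummaries.asScala.map(_.getKey)
}.toList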
Performance improvement
[Chart] Input split computation time in seconds (0–16,000) vs. number of partitions (1, 24, 240, 720), comparing Spark 1.5 RC2 with S3 bulk listing, for the query below.
SELECT * FROM nccp_log WHERE dateint=20150801 and hour=0 LIMIT 10;
S3 Insert Overwrite
Hadoop output committer
•  How it works:
•  Each task writes output to a temp dir.
•  The output committer renames the first successful task’s temp dir to the final destination.
•  Problems with S3:
•  S3 rename is copy and delete.
•  S3 is eventually consistent.
•  FileNotFoundException during “rename.”
S3 output committer
•  How it works:
•  Each task writes output to local disk.
•  Output committer copies first successful task’s output to S3.
•  Advantages:
•  Avoid redundant S3 copy.
•  Avoid eventual consistency.
Hive insert overwrite
•  How it works:
•  Delete and rewrite existing output in partitions.
•  Problems with S3:
•  S3 is eventually consistent.
•  FileAlreadyExistsException during “rewrite.”
Batchid pattern
•  How it works:
•  Never delete existing output in partitions.
•  Each job inserts a unique subpartition called “batchid.”
•  Advantages:
•  Avoid eventual consistency.
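A hedged Spark SQL sketch of the batchid pattern (target table, partition spec, and source query are illustrative):

// Write into a fresh batchid subpartition instead of overwriting the partition,
// so existing output is never deleted and readers never see half-rewritten data.
val batchId = System.currentTimeMillis.toString

sqlContext.sql(s"""
  INSERT INTO TABLE nccp_log_agg
  PARTITION (dateint = 20150801, batchid = '$batchId')
  SELECT * FROM nccp_log WHERE dateint = 20150801
""")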
Zeppelin / IPython Notebooks
Big data portal
•  One-stop shop for all big data related tools and services.
•  Built on top of Big Data API.
Out of box examples
On demand notebooks
•  Zero installation
•  Dependency management via Docker
•  Notebook persistence
•  Elastic resources
Notebook Infrastructure
Quick facts about Titan
•  Task execution platform leveraging Apache Mesos.
•  Manages underlying EC2 instances.
•  Process supervision and uptime in the face of failures.
•  Auto scaling.
Ephemeral ports / --net=host mode
[Diagram] Zeppelin (Docker container A, 172.x.x.x) and PySpark (Docker container B, 172.x.x.x) run on Titan cluster host machines (54.x.x.x); with --net=host they reach their Spark AMs in the YARN cluster over ephemeral ports.
Use Case: Pig vs. Spark
Iterative job
1. Duplicate the data and aggregate each copy differently.
2. Merge the aggregates back.
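A minimal Scala sketch of this pattern (table and column names are illustrative): cache the input once, build the differently keyed aggregates, then join them back together:

import sqlContext.implicits._  // for the $"col" syntax

// 1. Reuse the cached data and aggregate it in different ways.
val events = sqlContext.table("nccp_log").filter($"dateint" === 20150801)
events.persist()

val byTitle   = events.groupBy($"title_id").count().withColumnRenamed("count", "title_cnt")
val byCountry = events.groupBy($"country").count().withColumnRenamed("count", "country_cnt")
val byBoth    = events.groupBy($"title_id", $"country").count()

// 2. Merge the aggregates back.
val merged = byBoth
  .join(byTitle, "title_id")
  .join(byCountry, "country")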
Performance improvement
[Chart] Runtime (hh:mm:ss, up to ~2:09:36) for job 1, job 2, and job 3, comparing Pig and Spark.
Our contributions
SPARK-6018
SPARK-6662
SPARK-6909
SPARK-6910
SPARK-7037
SPARK-7451
SPARK-7850
SPARK-8355
SPARK-8572
SPARK-8908
SPARK-9270
SPARK-9926
SPARK-10001
SPARK-10340
Q&A
Thank You