4. Vals, vars and defs
[wpitula@wpitula-e540 tmp]$ sbt console
...
Welcome to Scala version 2.10.4 (OpenJDK 64-Bit Server VM, Java 1.8.0_45).
Type in expressions to have them evaluated.
Type :help for more information.
scala> var foo = 1
foo: Int = 1
scala> def fooMultipliedBy(x: Double) = foo*x
fooMultipliedBy: (x: Double)Double
scala> val result = fooMultipliedBy(2)
result: Double = 2.0
scala> result = fooMultipliedBy(3)
<console>:10: error: reassignment to val
scala> foo = 2
foo: Int = 2
scala> fooMultipliedBy(2)
res1: Double = 4.0
6. Classes and Objects
scala> class Person(age:Int = 22) {
| def canDrink(limit:Int = 18) = age >= limit //public by default
| }
defined class Person
scala> (new Person).canDrink()
res2: Boolean = true
scala> (new Person(18)).canDrink(21)
res3: Boolean = false
scala> object Person {
| def inAgeRange(from: Int, to: Int) = new Person(from + scala.util.Random.nextInt(to - from))
| }
defined object Person
scala> Person.inAgeRange(15, 17).canDrink()
res4: Boolean = false
7. Classes and Objects 2
∙ case classes can be seen as plain, immutable data-holding objects that
should depend exclusively on their constructor arguments.
∙ case class = class + factory method + pattern matching + equals/hashCode +
toString + copy
scala> case class Rational(n: Int, d: Int = 1)
defined class Rational
scala> val (a, b, c) = (Rational(1,2), Rational(3,4), Rational(1,2))
a: Rational = Rational(1,2)
b: Rational = Rational(3,4)
c: Rational = Rational(1,2)
scala> a == c
res0: Boolean = true
scala> a.copy(d = 3)
res1: Rational = Rational(1,3)
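The pattern matching that a case class brings for free, as a minimal sketch (describe is a made-up helper):

def describe(r: Rational): String = r match {
  case Rational(n, 1) => s"whole number $n"
  case Rational(n, d) => s"$n/$d"
}
describe(Rational(3))    // "whole number 3"
describe(Rational(1, 2)) // "1/2"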
18. Sbt
val sparkVersion = "1.2.1"

lazy val root = (project in file("."))
  .settings(
    name := "spark-streaming-app",
    organization := "pl.wp.sparkworkshop",
    version := "1.0-SNAPSHOT",
    scalaVersion := "2.11.5",
    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-core" % sparkVersion % "provided",
      "org.apache.spark" %% "spark-streaming" % sparkVersion % "provided",
      "org.scalatest" %% "scalatest" % "2.2.1" % "test",
      "org.mockito" % "mockito-core" % "1.10.19" % "test"
    ),
    resolvers ++= Seq(
      "My Repo" at "http://repo/url"
    ))
  .settings(
    publishMavenStyle := true,
    publishArtifact in Test := false,
    pomIncludeRepository := { _ => false },
    publishTo := {
      val nexus = "http://repo/url/"
      if (isSnapshot.value)
        Some("snapshots" at nexus + "content/repositories/snapshots")
      else
        Some("releases" at nexus + "content/repositories/releases")
    })
19. Exercise
”A prime number (or a prime) is a natural number which has
exactly two distinct natural number divisors: 1 and itself. Your
task is to test whether the given number is a prime number.”
def isPrime(x: Int): Boolean
> pl.wp.sparkworkshop.scala.exercise6
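One possible solution, as a sketch (not the reference implementation from the workshop package above):

def isPrime(x: Int): Boolean =
  x > 1 && (2 to math.sqrt(x).toInt).forall(x % _ != 0)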
23. RDD
An RDD is an immutable, deterministically re-computable,
distributed dataset.
Each RDD remembers the lineage of deterministic operations
that were used on a fault-tolerant input dataset to create it.
Each RDD can be operated on in parallel.
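A minimal sketch of such a lineage, assuming a SparkContext sc (created as on a later slide) and a hypothetical data.txt:

val lines  = sc.textFile("data.txt")            // base dataset
val errors = lines.filter(_.contains("ERROR"))  // deterministic operation
val ones   = errors.map(_ => 1)                 // another one
// toDebugString prints the chain of parent RDDs from which any lost
// partition can be recomputed.
println(ones.toDebugString)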
24. Sources
val conf = new SparkConf().setAppName("Simple Application")
val sc = new SparkContext(conf)
∙ Parallelized Collections
val data = Array(1, 2, 3, 4, 5)
val distData = sc.parallelize(data)
∙ External Datasets: Any storage source supported by Hadoop:
local file system, HDFS, Cassandra, HBase, Amazon S3, etc.
Spark supports text files, SequenceFiles, and any other
Hadoop InputFormat.
scala> val distFile = sc.textFile("data.txt")
distFile: RDD[String] = MappedRDD@1d4cee08
25. Transformations and Actions
RDDs support two types of operations:
∙ transformations, which create a new dataset from an
existing one
∙ actions, which return a value to the driver program after
running a computation on the dataset.
All transformations in Spark are lazy, in that they do not
compute their results right away. Instead, they just remember
the transformations applied to some base dataset (e.g. a file).
The transformations are only computed when an action
requires a result to be returned to the driver program.
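A small sketch of that laziness (sc and a hypothetical data.txt as before):

val lengths = sc.textFile("data.txt").map(_.length) // nothing is read or computed yet
val total   = lengths.reduce(_ + _)                 // reduce is an action: only now is the
                                                    // file read and the whole chain executed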
26. Transformations
map[U](f: (T) => U): RDD[U]
Return a new distributed dataset formed by passing each element of
the source through a function func.
filter(f: (T) => Boolean): RDD[T]
Return a new dataset formed by selecting those elements of the
source on which func returns true.
union(other: RDD[T]): RDD[T]
Return a new dataset that contains the union of the elements in the
source dataset and the argument.
intersection(other: RDD[T]): RDD[T]
Return a new RDD that contains the intersection of elements in the
source dataset and the argument.
groupByKey(): RDD[(K, Iterable[V])]
When called on a dataset of (K, V) pairs, returns a dataset of (K,
Iterable<V>) pairs.
and much more
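A few of the above in use, as a sketch (assuming an existing sc):

val nums    = sc.parallelize(1 to 10)
val even    = nums.filter(_ % 2 == 0)          // RDD[Int]
val pairs   = nums.map(n => (n % 3, n))        // RDD[(Int, Int)]
val grouped = pairs.groupByKey()               // RDD[(Int, Iterable[Int])]
val more    = even.union(nums.filter(_ > 8))   // RDD[Int]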
27. Actions
reduce(f: (T, T) => T): T
Aggregate the elements of the dataset using a function func (which
takes two arguments and returns one)
collect(): Array[T]
Return all the elements of the dataset as an array at the driver
program.
count(): Long
Return the number of elements in the dataset.
foreach(f: (T) => Unit): Unit
Run a function func on each element of the dataset.
and much more
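And the matching actions, continuing the sketch above:

nums.reduce(_ + _)    // 55, computed on the cluster, returned to the driver
even.collect()        // Array(2, 4, 6, 8, 10)
grouped.count()       // 3 (one entry per distinct key)
nums.foreach(println) // runs on the executors; on a cluster the output ends up
                      // in the executor logs, not on the driver console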
28. spark-shell
Just like Scala REPL but with SparkContext
> ./bin/spark-shell --master "local[4]"
Spark assembly has been built with Hive, including Datanucleus jars on classpath
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.3.0
      /_/
Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_31)
Type in expressions to have them evaluated.
Type :help for more information.
Spark context available as sc.
SQL context available as sqlContext.
scala> sc.parallelize(List("Hello world")).foreach(println)
Hello world
30. spark-submit
Application jar
A jar containing the user's Spark application. Users should
create an "uber jar" containing their application along with its
dependencies. The user's jar should never include Hadoop or
Spark libraries; these will be added at runtime.
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://10.0.0.1:7077,10.0.0.2:7077 \
  --executor-memory 20G \
  --total-executor-cores 100 \
  /path/to/examples.jar \
  1000
33. Underlying Akka
”Akka is a toolkit and runtime for building highly concurrent,
distributed, and resilient message-driven applications on the
JVM.”
case class Greeting(who: String)

class GreetingActor extends Actor with ActorLogging {
  def receive = {
    case Greeting(who) => log.info("Hello " + who)
  }
}

val system = ActorSystem("MySystem")
val greeter = system.actorOf(Props[GreetingActor], name = "greeter")
greeter ! Greeting("Charlie Parker")
36. Master, Worker, Executor and Driver
Driver program
The process running the main() function of the
application and creating the SparkContext
Cluster manager
An external service for acquiring resources on the
cluster (e.g. standalone manager, Mesos, YARN)
Worker node
Any node that can run application code in the cluster
Executor
A process launched for an application on a worker
node, that runs tasks and keeps data in memory or disk
storage across them. Each application has its own
executors.
38. Job, Stage, Task
Job
A parallel computation consisting of multiple tasks that
gets spawned in response to a Spark action (e.g. save,
collect).
Stage
Each job gets divided into smaller sets of tasks called
stages that depend on each other (similar to the map and
reduce stages in MapReduce); you’ll see this term used in
the driver’s logs.
Task
A unit of work that will be sent to one executor
44. DataFrame
A DataFrame is a distributed collection of data organized into
named columns.
DataFrame ≈ RDD[Row] ≈ RDD[String] + schema
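A rough sketch of that equivalence, assuming an existing sc and sqlContext and a hypothetical people.txt with name,age lines:

import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StructType, StructField, StringType, LongType}
val rowRDD = sc.textFile("people.txt")              // RDD[String]
  .map(_.split(","))
  .map(p => Row(p(0), p(1).trim.toLong))            // RDD[Row]
val schema = StructType(Seq(
  StructField("name", StringType, nullable = true),
  StructField("age", LongType, nullable = true)))   // the schema part
val df = sqlContext.createDataFrame(rowRDD, schema) // DataFrame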
45. DataFrame Operations
val sc: SparkContext // An existing SparkContext.
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
// Create the DataFrame
val df = sqlContext.read.json("examples/src/main/resources/people.json")
// Show the content of the DataFrame
df.show()
// age name
// null Michael
// 30 Andy
// 19 Justin
// Print the schema in a tree format
df.printSchema()
// root
// |-- age: long (nullable = true)
// |-- name: string (nullable = true)
// Select only the ”name” column
df.select("name").show()
// name
// Michael
// Andy
// Justin
46. DataFrame Operations 2
// Select everybody, but increment the age by 1
df.select(df("name"), df("age") + 1).show()
// name (age + 1)
// Michael null
// Andy 31
// Justin 20
// Select people older than 21
df.filter(df("age") > 21).show()
// age name
// 30 Andy
// Count people by age
df.groupBy("age").count().show()
// age count
// null 1
// 19 1
// 30 1
47. SQL Queries
case class Person(name: String, age: Int)
// toDF() needs the implicit conversions from the SQL context.
import sqlContext.implicits._
// Create an RDD of Person objects and register it as a table.
val people = sc.textFile("examples/src/main/resources/people.txt")
  .map(_.split(",")).map(p => Person(p(0), p(1).trim.toInt)).toDF()
people.registerTempTable("people")
// SQL statements can be run by using the sql methods provided by sqlContext.
val teenagers =
  sqlContext.sql("SELECT name, age FROM people WHERE age >= 13 AND age <= 19")
val hc = new org.apache.spark.sql.hive.HiveContext(sc)
val negativesQuery = s"""select event
  |from scoring.display_balanced_events lateral view explode(events) e as event
  |where event.label=0""".stripMargin
val negatives = hc.sql(negativesQuery).limit(maxCount)
54. Example
> pl.wp.sparkworkshop.spark.streaming.exercise1.SocketWordsCount
val conf = new SparkConf().setAppName("Example")
val ssc = new StreamingContext(conf, Seconds(10))
// Create a DStream that will connect to hostname:port, like localhost:9999
val lines = ssc.socketTextStream("localhost", 9999)
// Split each line into words
val words = lines.flatMap(_.split(" "))
val pairs = words.map(word => (word, 1))
val wordCounts = pairs.reduceByKey(_ + _)
// Print the first ten elements of each RDD generated in this DStream to the console
wordCounts.print()
// Start the computation
ssc.start()
ssc.awaitTermination() // Wait for the computation to terminate
55. ForeachRDD
import org.apache.spark.streaming.dstream.DStream
val dstream : DStream[(String, String)] = ???
// we’re at the driver
dstream.foreachRDD(rdd =>
  // still at the driver
  rdd.foreachPartition(partition =>
    // now we're at the worker
    // anything has to be serializable or static to get here
    partition.foreach(elem =>
      // still at the worker
      println(elem)
    )
  )
)
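A typical use of this pattern is one connection per partition instead of per element; createConnection and send below are hypothetical placeholders for whatever client is actually used:

dstream.foreachRDD { rdd =>
  rdd.foreachPartition { partition =>
    val connection = createConnection()              // hypothetical, built on the worker
    partition.foreach(elem => connection.send(elem)) // hypothetical send
    connection.close()
  }
}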
56. Checkpoints
∙ Metadata checkpointing
∙ Configuration
∙ DStream operations
∙ Incomplete batches
∙ Data checkpointing - saving the generated RDDs to reliable
storage. In stateful transformations, the generated RDDs depend
on RDDs of previous batches, which causes the length of the
dependency chain to keep increasing with time.
57. Checkpoints - example
def createStreamingContext(): StreamingContext = {
  val ssc = new StreamingContext(...)   // new context
  ssc.checkpoint(checkpointDirectory)   // set checkpoint directory
  val lines = ssc.socketTextStream(...) // create DStreams
  lines.checkpoint(Seconds(120))
  ...
  ssc
}
// Get StreamingContext from checkpoint data or create a new one
val context = StreamingContext.getOrCreate(checkpointDirectory,
  createStreamingContext _)
// Start the context
context.start()
context.awaitTermination()
59. Tuning
∙ Reducing the processing time of each batch of data by
efficiently using cluster resources.
∙ Level of Parallelism in Data Receiving
∙ Level of Parallelism in Data Processing
∙ Data Serialization
∙ Setting the right batch size such that the batches of data can
be processed as fast as they are received (that is, data
processing keeps up with the data ingestion); see the
configuration sketch below.
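A configuration sketch of those knobs (the Spark property names are real, the values are illustrative only):

val conf = new SparkConf()
  .setAppName("tuned-streaming-app")
  .set("spark.default.parallelism", "48") // level of parallelism in data processing
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer") // data serialization
// Level of parallelism in data receiving: several receivers, merged into one DStream.
val ssc = new StreamingContext(conf, Seconds(2)) // batch size chosen so processing keeps up
val streams = (1 to 3).map(_ => ssc.socketTextStream("localhost", 9999))
val unified = ssc.union(streams)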
60. Further reading
∙ Programming guides (core, SQL, streaming)
∙ Integration guides (Kafka, Flume, etc.)
∙ API docs
∙ Mailing list