Machine learning is one of the most promising, and most difficult to understand, fields of the modern age. Here are the slides from Slater Victoroff's (CEO of indico) talk at General Assembly Boston, aimed at non-technical folks, on how to separate the signal from the noise. Stay tuned for the next time he speaks:
https://generalassemb.ly/education/machine-learning-for-non-technical-people
3. Who am I?
• Slater Victoroff
• Olin College of Engineering
• Typical young hoodie-and-flip-flop-wearing entrepreneur
• Someone who cares very deeply about machine learning
• CEO of indico
5. Such a big buzzword.
Here's what it comes down to in a human definition:
A class of computer algorithms and mathematical models that allow machines to perform general tasks, like identifying human faces in photos. The models are used to make predictions and decisions, which you can then use to solve real-world problems, such as understanding how your customers feel about your brand across various social media channels.
The neat thing is that instead of hiring 100 people to analyze 1,000 data points each, you could get a single machine to do it in a fraction of the time.
14. Language is blurry: sarcasm, etc.
Computers are bad at dealing with the world when there is inconsistency.
Where there's a gray area, machine learning can solve the issue.
15. A social media example
Say you're a brand and you want to know what people are saying about you.
You look through everyone talking about your brand on Twitter, Facebook, etc.
Now you want to look at how popular those people are to find your influencers.
And finally, you want to know: what are they talking about?
In the old spreadsheet way, we always just ignored these problems because they sat in a gray area we couldn't access.
18. • Marty McFly ended up in 1955, which is the same year that the first branch of ML came out (AI movie to come later)
• During the Cold War, Georgetown and IBM found ML useful because they wanted to translate and analyze a large amount of Russian text
• MIT went after the image side, teaching computers to recognize objects and scenes. They tried to teach the computer to look at a picture and determine whether it showed a bird or a plant.
20. CSAIL
• The Computer Science and Artificial Intelligence Laboratory, known as CSAIL, is the largest research laboratory at MIT and one of the world's most important centers of information technology research.
• Its roots go back to the AI lab that Marvin Minsky co-founded in the late 1950s.
22. "We're pretty sure we bit off more than we can chew here."
– ALPAC, 1966
23. • Committees were spun up to assess progress on translation and recognition.
• In one solid decade, we effectively made no progress. We had one-off ML systems.
• We could teach a computer to understand one sentence by showing it that one sentence.
• We made no progress, spent a lot of money, and cut the research. It was the death of an era.
During that time…
28. Sentiment Analysis
Sentiment analysis = determine whether a piece of text is positive or negative.
How do we do it?
Well, we map each word to its sentiment and give the words a score.
AKA: a lexicon-based approach (see the sketch below).
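As a rough illustration of that lexicon-based approach, here is a minimal Python sketch. The tiny word-score lexicon and the example sentences are made up for illustration; a real lexicon would contain thousands of scored words.

```python
# A minimal lexicon-based sentiment scorer (toy lexicon, illustrative only).
LEXICON = {
    "love": 2.0, "great": 1.5, "good": 1.0,
    "bad": -1.0, "horrendous": -2.0, "regret": -1.5,
}

def sentiment_score(text: str) -> float:
    """Sum the scores of every known word; unknown words count as zero."""
    return sum(LEXICON.get(word.strip(".,!?"), 0.0) for word in text.lower().split())

print(sentiment_score("I love these new shoes"))                  # 2.0, reads as positive
print(sentiment_score("This product left me with deep regret."))  # -1.5, reads as negative
```

This also hints at why the "tourist trap" sentence a few slides later is so hard: a word-by-word score is dragged down by "horrendous" even though the review as a whole is positive.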
32. "I have to say, that while most of my experiences at tourist traps have been horrendous, the one I recently went to broke the pattern."
• Many humans can't figure out the sentiment of this sentence
• Gray areas of language = why sentiment analysis is quite a difficult problem for computers to solve
35. Then what?
• Well, it's hard
• Take a spreadsheet
• Label each piece of text as positive vs. negative
• Guess which words made it positive or negative
• Train the model on half of the spreadsheet and then make predictions on the other half (see the sketch below)
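Here is a minimal sketch of that "train on half, predict on the other half" recipe, assuming you already have a labeled spreadsheet with a text column and a positive/negative label column. The six example rows, the labels, and the choice of scikit-learn models are illustrative assumptions, not the specific setup used in the talk.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Toy labeled "spreadsheet": one text column, one label column (made up for illustration).
texts = [
    "I love these new shoes", "great product, would buy again",
    "good value for the price", "horrendous experience, never again",
    "left me with a deep feeling of regret", "bad customer service",
]
labels = ["positive", "positive", "positive", "negative", "negative", "negative"]

# Hold out half of the rows so the model is scored on text it has never seen.
train_texts, test_texts, train_labels, test_labels = train_test_split(
    texts, labels, test_size=0.5, random_state=0, stratify=labels)

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

print(model.predict(test_texts))             # predictions on the unseen half
print(model.score(test_texts, test_labels))  # accuracy on the unseen half
```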
38. Performance Metrics
Customer   Did they buy?
1          No
2          No
3          No
4          No
5          No
6          Yes
7          No
8          No
9          No
10         No
11         No
12         Yes
13         No
14         No
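Only 2 of those 14 customers bought, which is exactly the situation the next slide warns about: a model that blindly answers "No" for everyone looks deceptively accurate. A quick back-of-the-envelope check (plain Python, no machine learning involved):

```python
# The 14 answers from the table above: only customers 6 and 12 bought.
actual = ["No"] * 14
actual[5] = "Yes"   # customer 6
actual[11] = "Yes"  # customer 12

# A "model" that simply predicts No for every customer.
predicted = ["No"] * 14

accuracy = sum(p == a for p, a in zip(predicted, actual)) / len(actual)
print(round(accuracy, 2))  # 0.86: 12 of 14 correct, yet it never finds a single buyer
```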
39. Performance Metrics
- Accuracy isn't necessarily the best performance metric
- Predicting sentiment is a very different problem depending on whether the text you're making predictions on consists of Amazon reviews, tweets, or medical journals
- It also depends on how much data you've got
- When you teach a computer what sentiment is, you end up showing it a huge number of examples. Depending on the data you've got, the number of examples you might use ranges from a few hundred to hundreds of millions
- It's not fair to use those examples to check your model's accuracy; you already know the answers
40. Learn more about sentiment analysis and performance metrics:
What Even Is Sentiment Analysis?
41. Precision vs Recall
Precision: fraction of retrieved instances that are relevant
Recall: fraction of relevant instances that are retrieved
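To make those definitions concrete, here is a small hypothetical continuation of the customer table above: suppose a model flags three of the fourteen customers as likely buyers, and only one of them actually bought. The specific flagged customers are invented for the example.

```python
actual_buyers = {6, 12}   # from the table: the customers who really bought
flagged = {6, 9, 13}      # hypothetical model output: customers it predicts will buy

true_positives = len(flagged & actual_buyers)  # 1 (customer 6)
precision = true_positives / len(flagged)       # 1/3: of the flagged customers, how many really bought?
recall = true_positives / len(actual_buyers)    # 1/2: of the real buyers, how many did we flag?
print(precision, recall)
```

Note that the always-"No" baseline from the earlier sketch scores roughly 86% accuracy but finds none of the buyers (zero recall), which is why precision and recall are worth checking alongside accuracy.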
42. Overfitting
This product left me with a deep feeling of regret.
This film left me with a deep feeling of regret, love, and hopelessness for a life not lived.
I #love these new @nike shoes
43. Overfitting
• Overfitting means you "fail to generalise to examples outside of your training set" (see the sketch below)
• In other words… you're living under a rock. You're great at recognizing everything under your rock, but you don't understand the rest of the world
• Domain is a factor: there are so many different kinds of text (scientific journal articles vs. tweets)
• No one model is going to be the best at every kind of text
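As a toy picture of that "living under a rock" failure mode, here is a sketch of an extreme overfitter: a model that simply memorizes its training sentences. It is perfect on text it has literally seen before and useless on anything new. The sentences are made up for the example.

```python
# An extreme overfitter: it memorizes the training set word for word.
training_data = {
    "this product left me with a deep feeling of regret": "negative",
    "i #love these new @nike shoes": "positive",
}

def memorizer_predict(text: str) -> str:
    # Perfect on sentences it has seen before, clueless on everything outside its "rock".
    return training_data.get(text.lower(), "no idea")

print(memorizer_predict("I #love these new @nike shoes"))  # "positive" (seen in training)
print(memorizer_predict("These new shoes are fantastic"))  # "no idea" (fails to generalise)
```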
For a more in-depth look at sentiment analysis, see this post: https://indico.io/blog/what-is-sentiment-analysis/