This document summarizes a webinar on building serverless chatbots using PubNub and IBM Watson Conversation. It introduces PubNub BLOCKS as a way to run serverless JavaScript functions, then demonstrates three chatbots built with these technologies: a heart-emoji bot, an image-processing bot, and a music trivia bot that uses Watson APIs. The document emphasizes using serverless architectures and stateless AI microservices, with conversation context stored in the network, to build scalable chatbots.
Housekeeping Items
Webinar slides, live recording and Q&A will be emailed
Ask questions at any time during the presentation
Use chat window on the webinar panel
We're on Twitter: @IBMWatson #BuildingWithWatson
Our Expert
Josh Marinacci
Head of Developer Evangelism
PubNub
josh@pubnub.com
What Does Serverless Mean?
Program functions, not apps or servers.
CDN for computation, in the network, near the user.
Acts as a single computer
Talks to 3rd party micro-services
Infinitely Scalable
Introducing PubNub BLOCKS
Promise based modern JavaScript
Runs in the network, nearest to user
Very low latency
Very high security
Infinitely Scalable
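As a sketch of what "promise based modern JavaScript" in the network looks like: a BLOCKS event handler is essentially a function that receives a message in flight, transforms it, and returns it (or a promise resolving to it). The handler shape below is illustrative, not the exact BLOCKS API; see the BLOCKS documentation for the real request object and module names.

```javascript
// Minimal sketch of a BLOCKS-style event handler: a function that
// receives a message in flight, transforms it, and passes it on.
// The request shape here is an assumption for illustration.
function beforePublish(request) {
  // Tag every message with the time it passed through the network.
  request.message.processedAt = Date.now();
  // Handlers return the request, typically via a promise.
  return Promise.resolve(request);
}

// Example: a chat message about to be published on a channel.
const example = { channel: 'chat', message: { text: 'hello' } };
beforePublish(example).then(r => console.log(r.message.text)); // hello
```

Because the handler is just a function, it can be composed with other functions and tested locally before being deployed into the network.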
Chatbot Requirements
Realtime Infrastructure
Some level of Artificial Intelligence
Domain specific knowledge
But remember...
Requirements, not focus
So...
Don’t build your own Realtime Infrastructure
Don’t build your own Artificial Intelligence
Are you ready to get started?
BLOCKS Catalog:
https://www.pubnub.com/blocks-catalog/
Conversation API BLOCK:
https://www.pubnub.com/blocks-catalog/pubnub-tutoring-bot/
Conversation Documentation:
https://www.ibm.com/watson/developercloud/doc/conversation/index.html
What’s next?
Look out for a follow up email with a copy of these slides, a recording of the
webinar, Q&A recap, and additional resources
The series will continue bi-weekly on Wednesdays @ 1pm ET / 10am PT
Advanced Audio Transcription with Watson Speech-to-Text - March 8
Easily Deploy your Chat Bot to Multiple Channels with Stamplay - March 22
Thank you for attending!
Contact us
Phone: 1-877-253-0308
Email: wdc-dev@bg.vnet.ibm.com
Online: Watson Developer Cloud
Editor's Notes
These were all built with a serverless platform. A serverless platform is not a cloud provider: the difference is that you aren't provisioning a server, even a virtual one. There is no administration. You work at the level of functions that you put into the network.
Also, you aren't just swapping one provider's cloud server for another. This code runs in the network, more like a CDN. The message goes from the phone to a node on the edge, whatever part of the network is closest to the end user.
Of course there are real servers somewhere, but it's serverless because there isn't one server, or even one group of servers. Underneath is computing infrastructure around the globe acting as a single unit, always putting computation as close as possible to the end user for the lowest latency and greatest security.
The serverless provider then talks to a service for the AI. It may also talk to other services for other parts of the chatbot. The important thing is that the provider in the middle has the controlling computation and stores all the state. Each of these providers is a simple microservice that does one thing well and is as stateless as possible, making it easier to build and more reliable.
Furthermore, as your chatbot becomes more popular, which we hope it will, you don't have to worry about scaling: the infrastructure scales for you. Most providers in this space have usage-based pricing, so your costs only go up when your revenue does. There are no cliffs to worry about. Scaling becomes a non-issue, so you can focus on what you are actually trying to do.
When you make a chatbot you need some sort of realtime infrastructure. A chatbot involves constant communication between the end user and your bot, possibly with other platform proxies in the way, and possibly with other web services that provide the knowledge or actions the chatbot needs. So you need realtime infrastructure to tie all this together with very low latency and high security.
The other thing you need is some level of Artificial Intelligence. Some chatbots need full natural language processing. Some need a backend with a rich neural net. Some just need a glorified phone tree. It really depends on what you are doing.
So today I'm going to show you how to build a simple chatbot. We are going to use serverless infrastructure and some 3rd-party services. But before we get into how to build them, let me show them to you.
Okay, all of these were built with the same basic architecture. The client, which can be a phone or a webpage or really anything, talks to the realtime network provider, in this case PubNub.
For the Emoji demo all of the computation is done in the network's serverless compute system called PubNub BLOCKS.
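The core of that emoji bot can be sketched as a single pure function run inside the BLOCK. The replacement rule below is an assumption for illustration; the real demo may map more tokens than just "<3".

```javascript
// Sketch of the emoji bot's core logic: replace "<3" with a heart
// before the message reaches the other chat users. Inside a BLOCK
// this would rewrite request.message.text in flight; here it is a
// pure function so it is easy to test.
function replaceHearts(text) {
  return text.replace(/<3/g, '\u2764\uFE0F'); // ❤️
}

console.log(replaceHearts('I <3 serverless')); // I ❤️ serverless
```

Because the whole transformation happens in the network, no client ever needs updating when the emoji mapping changes.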
The Cloudinary image chat demo uses a simple natural language parser implemented as JavaScript running in a BLOCK, then hands the result off to Cloudinary to do the image processing.
The important thing here is that the network and the computation are as close as possible to the end user. If the user is in one region, it uses a compute node near them; if in another region, it uses one there. The network spans the globe, so it is always as close as possible to the user, and when it communicates between users or with a 3rd-party service it can route through whatever node is closest.
Now here's the Mr. Rockbot diagram again.
Mr. Rockbot is the most complicated because he uses several services. In my original version I used the IBM AlchemyLanguage API to look at the input text; it pulls out entities and intents. Unfortunately that API is really meant for looking at larger documents, not tiny snippets of text.
However, IBM recently introduced a new API specifically for conversational interactions called, appropriately enough, Conversation. It was originally called the Dialog Service, so you might see some tutorials under that name. With Conversation you actually train the system by giving it examples of the kinds of things you are looking for.
First you create 'intents.' These are the things the user could ask the chatbot to do. For example, if this were a home automation bot you might use an intent like 'turn on'.
Next you create entities. These are things the user could ask about, or are the target of an intent. For example: in a home automation bot you might use an entity like 'lights' or 'door'. You can also specify synonyms so that it can recognize many forms of the request.
Finally, you can create dialogs. These are workflows the user can go through, letting you specify what the bot actually says to the end user in different circumstances. If you already have a knowledge base of facts and responses, you can skip this part and just use the intents and entities.
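To make the intent/entity split concrete, here is a toy matcher for the home-automation example. This hand-rolled lookup table is a stand-in for what Conversation's trained models do; the service learns from examples rather than fixed phrase lists, and all names below are made up for illustration.

```javascript
// Toy recognizer for the home-automation example: intents are things
// the user wants done, entities are the targets, and synonyms let one
// entity match many surface forms. Watson Conversation learns this
// from training examples; this table just shows the shape of the idea.
const intents  = { turn_on: ['turn on', 'switch on'], turn_off: ['turn off', 'switch off'] };
const entities = { lights: ['lights', 'lamp'], door: ['door', 'front door'] };

// Return the first table entry whose phrases appear in the utterance.
function classify(text, table) {
  const lower = text.toLowerCase();
  for (const [name, phrases] of Object.entries(table)) {
    if (phrases.some(p => lower.includes(p))) return name;
  }
  return null;
}

const utterance = 'please switch on the lamp';
console.log(classify(utterance, intents));  // turn_on
console.log(classify(utterance, entities)); // lights
```

The synonym list is why 'lamp' resolves to the 'lights' entity: one canonical name covers many ways of asking.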
After you teach Watson about your problem domain, you can call it from your serverless code with a simple HTTP POST. It's important to note that Conversation is a stateless API: to understand where a conversation stands, you have to send a context structure with each request. I store this context in the serverless platform. But remember that this code always runs on the edge nearest the end user, and that user might move. So I keep the context in our eventually consistent key-value store; if the user moves to another part of the network, the context follows them.
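Putting those pieces together, each request to Conversation carries the stored context, and the context that comes back is written to the key-value store for the next turn. This sketch fakes both the store and the HTTP call with plain objects; real BLOCKS code would use the platform's key-value and HTTP modules against Watson's message endpoint, and every name here is illustrative.

```javascript
// Sketch of the stateless-API pattern: read the user's conversation
// context from an (eventually consistent) key-value store, send it
// along with the request, and write the updated context back. The
// Map and callWatson below are stand-ins for the real store and the
// Conversation HTTP endpoint.
const kvstore = new Map(); // stand-in for the network's KV store

function buildPayload(text, context) {
  // Conversation is stateless: context must ride along on every call.
  return { input: { text: text }, context: context || {} };
}

function handleMessage(userId, text, callWatson) {
  const payload  = buildPayload(text, kvstore.get(userId));
  const response = callWatson(payload);
  // Persist the returned context so the next request, possibly served
  // by a node on the other side of the globe, can pick it up.
  kvstore.set(userId, response.context);
  return response;
}

// Fake Watson: echoes the input and counts turns in the context.
const fakeWatson = p => ({
  output: { text: ['you said: ' + p.input.text] },
  context: { turns: (p.context.turns || 0) + 1 }
});

handleMessage('user-1', 'hello', fakeWatson);
const reply = handleMessage('user-1', 'again', fakeWatson);
console.log(reply.context.turns); // 2
```

Because the state lives in the store rather than on any one node, the bot behaves the same no matter which edge location handles a given turn.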