Artificial intelligence (AI), machine learning, and big data have been discussed so much in the past few years that they've almost become buzzwords.
Yet the concepts present many opportunities and challenges, and it is imperative that marketers and communicators educate themselves on the subject: what AI is, what it does, and how to adapt.
The content in this presentation is based on original research Martin conducted, as part of the McMaster/Syracuse Master in Communications Management program.
Focusing on single-purpose, artificial narrow intelligence, we discuss several key issues, including algorithm bias, ethics and transparency, human/AI agent relationships, and the future of jobs.
Please see slide notes for additional details.
Connect with me
▪ Martin Waxman Communications
▪ LinkedIn Learning
▪ #AIinPR Panel
▪ @martinwaxman
▪ martinwaxman@gmail.com
Editor's Notes
Narrow AI is single-purpose and designed to do one task better than a human. Examples include digital voice assistants, IBM Watson, self-driving cars.
General AI is when a machine can transfer learning and intelligence from one task to another, similar to us.
Superintelligence, often referred to as the Singularity, is when machines become smarter than humans are or ever will be; that possibility underscores the importance of establishing ethical, beneficial, and human-centric AI.
Structured data – organized and labeled, what you’d find in a spreadsheet.
Unstructured data – the words we write and say or the images in photos, video and the world around us.
Semi-structured data – a combination of the two. Twitter is a good example. Your user name, number of followers, number of tweets, and so on are structured, while the content of your tweets (words and images) is unstructured.
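The Twitter example above can be sketched in a few lines of code. This is a minimal illustration of a semi-structured record; the field names are hypothetical, not Twitter's actual API schema.

```python
# A hypothetical tweet record: structured metadata plus unstructured content.
tweet = {
    # Structured fields: labeled and organized, like columns in a spreadsheet
    "user_name": "@example_user",
    "followers": 12000,
    "tweet_count": 8500,
    # Unstructured field: free-form text a machine must interpret (e.g. with NLP)
    "text": "Excited to share new research on AI and communications!",
}

# The structured part can be queried directly...
structured_fields = {k: v for k, v in tweet.items() if k != "text"}

# ...while the unstructured part is just raw words until a model processes it.
unstructured_text = tweet["text"]

print(sorted(structured_fields))
print(unstructured_text)
```

The point of the sketch: the metadata is ready for a spreadsheet or database as-is, while the tweet text needs further processing before a machine can make use of it.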
Natural language processing can understand the context and meaning of words and predict what might come next.
Computer vision is a machine’s ability to see the world and identify what’s in it.
Machines learn in a way similar to children: through the experience they get from multi-sensory data, and a lot of it.
It's important to understand the types of data you're generating (MPEG, JPG, TXT, CSV) and how you're going to use them.
Image created by Karen Hao, MIT Technology Review journalist.
An algorithm is simply a set of steps—or recipe—that’s used to solve a problem.
The key to narrow AI is statistical predictions made by processing lots and lots of data.
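The two notes above (an algorithm as a recipe, and prediction from lots of data) can be shown in a toy example. This is a deliberately tiny sketch, not a real language model: it counts which word follows which in a sample text, then "predicts" the most frequent continuation, the same statistical idea that, at enormous scale, powers narrow AI.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "lots and lots of data"
corpus = "the cat sat on the mat the cat ate the fish".split()

# The algorithm (recipe): count which word follows each word
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent next word observed in the data."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it follows "the" most often here
```

With more data, the same counting-and-predicting recipe yields better guesses, which is why data quantity (and quality) matters so much for narrow AI.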
Google semantic search is a good example of how AI understands context.
Data bias and a lack of data diversity are among the biggest issues in AI.
It’s why Alexa and Google Assistant have trouble understanding questions in accented English.
Or why Amazon’s facial recognition software matched 28 members of Congress to the mug shots of felons.
We have to get this right, because facial recognition mistakes could change an innocent person’s life.
This underscores the importance of communicating with the highest ethical standards and ensuring organizations safeguard people’s data and privacy.
Google recently announced the public launch of its Google Duplex voice assistant, which can call restaurants and hair salons to book reservations or appointments.
Duplex sounds real, complete with the umms and uhhs we associate with human speech, and it was so convincing that people didn't realize they were talking to a machine.
The company was criticized for that—and rightly so. Now Duplex identifies itself as a digital assistant.
It became more transparent.
Transparency in communications is another element that falls under the purview of PR.
Three of the factors that help engender trust in a relationship are speed, consistency, and accuracy.
And machines have two of those down, and are getting better at the third (accuracy), because of the data we willingly provide.
Your chatbot texts back with an answer right away—your BFF, not so much.
As machines start to look and sound more like us and give us the answers we need, we could start to trust bots more than people.
Just think about it: if a passenger in your car contradicts your Google Maps directions, who do you believe?
I’ve just completed some research on human/AI agent relationships and how they might affect communications and trust.
At the present time, it appears we trust humans when it comes to opinion and machines when it comes to information or facts.
One finding from the participants I interviewed (computer scientists, journalists, researchers, digital communicators, agency owners, and entrepreneurs) was that virtually all of them thought people should be polite to AI agents.
Because that’s part of our humanity—who we are, and how we relate to people.
Managing relationships—again, that sounds a lot like PR.
Then, there’s the matter of media credibility and authenticity.
Can we believe the things we read and see?
False news or disinformation shared on social media has shown us how difficult that can be.
And now we have deepfake videos and natural language processing that can create synthetic media when given a prompt, and it's getting better at predicting which words or phrases should come next.
You can see how, in the wrong hands, it could automate the creation of fake news.
And, even if it’s used for good, it could take away even more jobs from journalists and communications professionals.
But it’s not all bad.
Deepfakes can also be used for entertainment.
For example, the Salvador Dali Museum in Florida created an interactive Dali video that talks to you and asks if you want to take a selfie with the master himself.
Which brings us to the matter of work and all the disruption we’ll face when machines replace humans on a wide scale.
Even with all the studies, there’s no real consensus on how bad, or how positive, the upheaval will be.
But right now, IBM Watson’s HR app can predict with 95% accuracy when someone’s thinking of leaving their job.
Paul Daugherty and H. James Wilson are two Accenture consultants who fall on the optimistic side of the debate.
They envision a kind of human/machine collaboration.
People will be responsible for training machines in empathy, communicating how AI decisions are made, ensuring data quality and security, and ethical behavior.
Yet if your job is made up of repetitive tasks, as with factory workers, taxi and truck drivers, accountants, human resources staff, stockbrokers, financial advisors, legal researchers, radiologists, and marketers and communicators, you’re at risk of being replaced for no reason other than that machines are cheaper and more efficient.
How do we communicate that, and develop and deliver employee training programs in an empathetic, people-centric way?
Another job for PR?
In many ways, the speed and scope of the changes are like the early days of social media: a combination of excitement, enthusiasm, and trepidation.
However, Christopher S. Penn observed that there’s a deeper knowledge gap between what marketers and communicators know about digital media and what they need to know about machine learning technology.
And that gap, coupled with our fear of mathematics, could pose a barrier to adoption.
This underscores the importance of learning to understand and communicate with data scientists, data analysts, and IT professionals, and of building relationships with them.
Recommendations:
The AI knowledge gap is an opportunity to break down barriers between academia and industry, and to spark dialogue and partnership. Researchers could produce new theories about the effects of AI on communications, relationships, and trust, and industry could test these in real-world settings. For instance, the government of Finland realized it didn’t have the resources to conduct breakthrough AI research, so it partnered with industry and educational institutions to create a free online course designed to help its workers become best-in-class AI implementers. That implementer role fits well with PR.
And speaking of training, we need to reimagine the public relations and marketing curriculum and integrate data science, fundamentals of coding, and statistical analysis into marketing and communications programs.
Right now, we can lead an organization-wide AI audit that identifies potential issues, opportunities, gaps, and challenges, and positions the communications team as a strategic integrator. Senior communicators can start the conversation by meeting with the heads of all the other departments. We could ask tough questions around how they’re currently using AI, which jobs could benefit from automation, how that might affect the workforce, and what new skills and resources are required. We should also consider how to cultivate new relationships, and maintain the ones we have. And we must proactively discuss data privacy and the ethical use of AI.
Finally, we need to redefine our roles, whether as explainers of AI, writers of conversational interactions, chief ethics and privacy officers, or guardians of an organization’s reputation and culture. All of these are strategic roles. This is especially important amid the coming job upheaval, and it underscores the importance of training.
Here are four books to read to help you on your AI learning journey.