Execs from Google, IBM and Salesforce were questioned about the wider societal implications of their technologies during a panel session here at Mobile World Congress.
Behshad Behzadi, who leads the engineering teams working on Google’s eponymous AI voice assistant, claimed many jobs will be “complemented” by AI, with AI technologies making it “easier” for humans to carry out tasks.
“For sure there is some shift in the jobs. There’s lots of jobs which will [be created which don’t exist today]. Think about flight attendant jobs before there was planes and commercial flights. No one could really predict that this job will appear. So there are jobs which will be appearing of that type that are related to the AI,” he said.
“I think the topic is a super important topic. How jobs and AI is related — I don’t think it’s one company or one country which can solve it alone. It’s all together we could think about this topic,” he added. “But it’s really an opportunity, it’s not a threat.”
“From IBM’s perspective we firmly believe that every profession will be impacted by AI. There’s no question. We also believe that there will be more jobs created,” chimed in Bob Lord, IBM’s chief digital officer.
“I firmly believe that augmenting someone’s intelligence is going to get rid of… the mundane jobs. And allow us to rise up a level. That we haven’t been able to do before and solve some really, really hard problems.”
Lord did at least mention the attendant need for retraining, to ensure that existing employees and workforces are not left behind by the blistering march of technology.
“In every profession, no one is going to be untouched by [AI]. Even when you talk about the creative world, we’re working with music producers right now to be their creative muse, to help them write songs, hit songs,” he said. “So I don’t think there’s any profession in the world that will not be hit by artificial intelligence in the coming years.”
The question of how to pay for mass retraining and upskilling programs necessary for the workers whose mundane jobs are handed to robots went undiscussed.
Salesforce’s John Carney, SVP for the telecoms and media industry, was also keen to spin the accelerating, AI-powered shift towards automation as a job creation opportunity. But he conceded it could be a threat even to white collar workers who do not proactively upskill and get to grips with using AI-enabled tools to intelligently augment their labor.
“If you look back in history with these paradigm shifts, these transitions, the data says that we created way more jobs than were eliminated,” he said. “So I think that’s going to happen again. There is going to be a transition. There’s going to be a need for education, there’s going to be a need to — especially people like me, my age — to try to work in this new world.”
“So I think it’s incumbent on both of us, both the folks that are educating, and the folks that need to be open to lifelong learning,” he continued. “I would say that AI’s going to change all of our jobs — mostly in a positive way. And I would say that I don’t believe AI is going to eliminate managers and the things that we do.
“But I will tell you that I think managers who use AI are going to have an advantage over managers that don’t use AI. So I want everyone to get out there and start playing with this. And start experimenting.”
World Economic Forum session moderator Isabelle Mauro also asked for the panel’s thoughts on governance, and how lawmakers can find a model to manage increasingly complex algorithmic technologies, as well as the increasingly powerful tech giants applying AI at scale, powered by their big data holdings.
Google’s Behzadi stayed quiet on this question. But Salesforce’s Carney spoke up to voice concern about the risk of bias in the data sets used to train AIs carrying over into problematic automated decisions.
“Transparency into how these algorithms are generating these predictions — that’s really important,” he said. “It doesn’t mean it’s easy to do.
“Because there can be bias built into these models and you’ve got to let people see how these decisions and these predictions are being made. So there’s work to do.”
“I don’t think it’s just an AI question. It’s a data ownership question as well,” added IBM’s Lord, advocating plainly for education rather than regulation, albeit, again, without discussing how the necessary reskilling programs could be funded.
Instead he discussed children he’s met around the world who he said are keen to learn coding and already expressing curiosity about AI technologies.
He then asked for an IBM video clip to be screened which featured a 12-year-old girl saying she’s more interested in learning coding than in learning other languages “like French and Spanish”, apparently unaware of how tone deaf such a suggestion might sound to an international audience at an event taking place in a Catalan-speaking region of Spain.
It was left to Mauro, a French speaker herself, to quip nervously that it’s “very scary” if children see no point in learning French.
“There’s a wave of innovation that’s going to come… that there’s no way we’re going to be able to control. We just have to educate people about being responsible for it,” added Lord, pitching the notion that the rise of automation is both inevitable and all but uncontrollable. “There’s always good with bad, right. You have to always stay on the side of good.
“I think to regulate something like this, you couldn’t regulate Moore’s Law. You couldn’t regulate Metcalfe’s Law. We’re at the same epicenter around AI.”
The panel session ended with a handful of questions taken from the audience, sourced via Slido.com. The question submitted by your TechCrunch correspondent received the most votes, and Mauro put it to the panel first.
Our question being: What happens to privacy when AI is tracking everything?
Lord jumped in first on this, claiming IBM’s position on privacy is “pretty strong” because it’s operating “in an opt-in world”.
“There has to be a sense that you as the consumer opt in to use your data in an interesting way. I think it’s for a brand to use your data responsibly,” he said, describing how he’s happy to give Starbucks his data because the coffee company gives him something he values (“good coffee”).
“So the permission base of, ‘I will give you so much data about me so you can use AI and you can use analytics to provide [me] with an increased value exchange’, that’s where you’re going to get that value.
“But clearly we’re not using AI in any situation to track people that don’t want to be tracked. Or behaviors or doing that in ways that don’t make any sense.”
Google’s Behzadi didn’t sit this one out, taking the opportunity to give an impromptu demo of Google’s voice assistant by asking how his favorite football team is doing, which in turn revealed that the AI knows which team he supports without him having to name it.
The implication being that users are willing to trade a portion of their privacy for that kind of incremental convenience. Although he conceded that control of which data gets shared is an important balancing consideration for algorithm builders.
“My [AI] assistant knows what my favorite team is — I don’t want to necessarily repeat that each time. The same way that I want to say how’s the traffic to home and work. So you have to — there’s certain types of information that I do want to share with the assistant but of course we need to always try to have the possibility to say that these types of information you don’t want [to be tracked] because it’s — because privacy [gets] addressed first,” he said, arguing that concerns about privacy can be balanced if you give users enough control over what aspects of their lives your technology tracks.
“One general principle around privacy is to have, really, users to be in control. It’s not necessarily something new for AI, it’s also for all the other types of data-based algorithms which we had already before. So as long as we try to always make sure that people are in control of what’s being used, potentially, and what’s not and what is the benefit of the data that I’m sharing.”