00:00:00:18 – 00:00:09:12
Erika
This is Techie and the Biz, a podcast that explains and simplifies how business technology is changing and why it can benefit your organization.
00:00:12:15 – 00:00:37:05
Erika
Today we are going to discuss the topic of deepfakes. Deepfakes have been making headline news almost every day and are top of mind for most companies and consumers. A deepfake is defined as an artificially created video or audio clip in which artificial intelligence, AI, is used to replicate and emulate the voice and image of a subject, so that the subject appears to say whatever the creator wishes them to say.
00:00:37:18 – 00:01:10:09
Max
We are excited to be joined today by one of the world experts on the topic and the author of the book Deepfakes. Nina Schick is one of the first globally recognized experts on generative AI. She is the founder of an AI consulting firm, Tamang Ventures, advising businesses and governments, including the U.S. and NATO. She's spoken to audiences at major venues around the world, from CES and TEDx to Microsoft, Adobe, DARPA and the United Nations.
00:01:10:21 – 00:01:18:11
Max
Nina appears regularly on prominent media such as the BBC, Bloomberg and CNBC. Welcome, Nina.
00:01:18:22 – 00:01:19:09
Erika
Welcome.
00:01:19:13 – 00:01:21:12
Nina
Great to be here. Thanks for having me.
00:01:22:00 – 00:01:27:17
Erika
How did you end up as the world expert in generative AI? Has this always been your career path?
00:01:29:11 – 00:01:53:21
Nina
Well, I think, just like with many good things in life, I just kind of stumbled onto it, right? For me, really, the story starts with my background in geopolitics, where for two decades I was working with global leaders, various political groups and campaign groups on seismic political issues shaping the world, from Russia's annexation of Crimea
00:01:53:21 – 00:02:25:04
Nina
all the way back in 2014, the shooting down of MH17, and election interference in the U.S. and around Europe. I also worked on Emmanuel Macron's electoral campaign in Europe. So the predominant theme of my political career was really how technology was transforming not only macro geopolitics, but also how it was reshaping the individual experience of pretty much everybody alive, and how quickly this had happened.
00:02:25:04 – 00:02:52:05
Nina
So exponential technology became something I was very interested in, and for me, the logical next step when you think about exponential technologies was artificial intelligence. And it was at the end of 2017 that I encountered what was, at the time, a really new form of artificial intelligence. If you think about traditional artificial intelligence, it's more AI that can classify or label data.
00:02:52:16 – 00:03:22:20
Nina
And this was a new type of AI that could actually create or generate data, and the first viral form of this AI was so-called deepfakes. I had this aha moment back in 2017 when I realized that this was going to be so important, and I decided not only to write a book about it, but to spend the years since then just digging into this field, which is now known as generative AI.
00:03:23:03 – 00:03:33:23
Nina
So it’s been very fascinating to see how in the past six months, the entire world now has come to understand that this is actually a technology that’s going to transform everything.
00:03:35:16 – 00:03:46:22
Erika
Why is it getting harder to spot deepfake video content? I also find it interesting that deepfakes are emerging at a time when video and media content are a large part of our communication.
00:03:47:08 – 00:04:19:05
Nina
So deepfakes, again, if you conceive of them as the first viral form of generative AI, are relatively nascent, right? When it became possible for AI to actually create new data, enthusiasts started using that AI to create synthetic, or AI-made, video. Now, it turns out that generative AI has the capability to generate anything based on its training data, and that includes human biometrics.
00:04:19:05 – 00:04:45:23
Nina
So often when we're talking about deepfakes, we're talking about AI-made media, including video, which synthesizes the biometrics of authentic people, or which can depict completely synthetic, generated people. Now, this was pretty rudimentary back in 2017, so to the naked eye you might spot that a video was not authentic, because the technology was just nascent and evolving.
00:04:46:08 – 00:05:19:19
Nina
A few years down the line, the capabilities of generative AI, and by extension also deepfakes, are exponentially better. So with anything you look at right now in terms of sophisticated AI-generated content, including deepfake videos emulating a real person, there is no way that you'd be able to tell that the video is synthetic and not authentic. And the reason is that the AI is just so darn good at creating synthetic media.
00:05:20:24 – 00:05:32:09
Max
Wow. Well, I'm curious, what is the most common use for deepfakes that consumers and businesses really need to look out for, now that you're saying it's almost impossible to spot the difference?
00:05:33:12 – 00:06:00:03
Nina
Yeah, it’s impossible to spot the difference. Just like with the internet. The first use case of deepfakes, right when it became possible to kind of clone people’s biometrics with AI, the first pioneering use case was in pornography. Now, this is a really horrendous use case which is universally targeted against women. So it’s this undeniably gendered phenomenon that was the first use case that was emerging at the end of 2017.
00:06:00:12 – 00:06:25:23
Nina
And at the time, I understood that this wasn't just a tawdry women's issue, because if you take the concept of being able to hijack someone's biometrics with AI, to depict them saying and doing things that they never did, this becomes a civil liberties issue. So very quickly from pornography, we've seen malicious deepfakes being used for.
00:06:27:21 – 00:06:29:07
Erika
Synthetic fraud.
00:06:29:07 – 00:06:48:15
Nina
In cyber attacks, and also to emulate the CEOs of large corporations. So it very quickly moved from pornography into not only mis- and disinformation, but also into fraudulent attacks and penetration attacks on companies.
00:06:49:20 – 00:07:07:17
Max
Well, I've actually been watching political video deepfakes on the news almost daily now. But even with just the audio deepfakes, they're using AI to sound like family members to con people out of money. I mean, this seems like it's becoming almost standard now.
00:07:09:03 – 00:07:46:03
Nina
Yeah. So when I wrote my book, one of the things, and it's so fascinating to reflect on, is how quickly the capabilities of this technology have become normalized. In 2017, the mere concept that AI could actually create highly compelling media that looked authentic, including video, which is traditionally so difficult to doctor, and audio, which again is traditionally so difficult to doctor when you think about synthesizing somebody's voice, would have seemed impossible.
00:07:46:03 – 00:08:09:00
Nina
But now not only is it possible, but it's becoming democratized. And one of the really interesting things about deepfakes, and actually all generative AI, is that they can come in any form of digital media. So it can be audio, video, images and text. And what you are talking about, the audio impersonation of people's voices, so-called vishing, voice
00:08:09:00 – 00:08:38:01
Nina
phishing, is increasingly being used as a tool of fraud, because all that you need, as these systems become better at generating content, is training data. And in this case, if what I'm trying to do is hack your voice, right, to clone your voice, I'll need some training data of your voice. Now, in 2017, when I started looking at this, to clone somebody's voice you needed hours and hours of training data, so you couldn't just clone anybody's voice.
00:08:38:01 – 00:09:01:08
Nina
You would need somebody who has a lot of public recordings out there, maybe a politician. And it would be incredibly expensive, because you need to compute all of that, and you need the model to create a recreation of a single person's voice. Now the base models have become much more sophisticated, and it means that anybody's voice can be cloned with as little as 3 seconds of training data.
00:09:01:19 – 00:09:27:11
Nina
So that's one voicemail, one YouTube video, one LinkedIn post, one phone message, and that is potentially enough for your voice to be cloned with artificial intelligence. Of course, the democratization element of this means that whereas a few years ago this was only possible for very sophisticated actors who had some know-how and some money and some resources,
00:09:27:23 – 00:09:47:12
Nina
now it's possible for pretty much anybody to do it, as well as for anyone to be targeted, which is why you see it being used in that age-old scam, right? Somebody's calling you saying their loved one has been in an accident or is in prison. And how much more convincing is that call when it's actually their voice?
00:09:47:13 – 00:10:07:08
Nina
Are you going to turn your son away, or your daughter, or your husband, when it really sounds like you're speaking to them? Like, hey, I need $2,000 right now, I need to be bailed out of jail. Okay? You're not going to take the chance. You're going to pay it, right? So an age-old scam becomes highly personalized and even more effective.
00:10:07:08 – 00:10:11:08
Nina
And these scams were effective anyway, without even vishing involved.
00:10:12:06 – 00:10:25:07
Max
Yeah, it just adds one more layer of credibility. That makes perfect sense. So is deepfake success based solely on the technology, or is it our own brain interpreting what it sees or hears? Is that what's causing the problem?
00:10:26:22 – 00:10:53:21
Nina
I mean, it's both, right? Because on one hand, the sophistication of the technology makes it all the more convincing, because you can actually hear somebody's voice, you can actually see them in a video saying or doing something, right? There is a cognitive bias we have that's called processing fluency: when something looks and sounds like it's right, you want to believe that it's true.
00:10:53:22 – 00:11:24:08
Nina
This is one reason why video, for example, is so compelling as evidence in a court of law, because people tend to see it as an extension of their own perception. If it's captured on video, or you've got the audio soundtrack, it must be true. So there's certainly that element. Nonetheless, scams and disinformation and misinformation are age-old problems that go hand in hand with the very birth of civilization itself.
00:11:24:17 – 00:11:45:18
Nina
You didn't even need vishing or sophisticated AI-generated videos; people were already falling for online scams and mis- or disinformation, and before the Internet this was happening in the analog age as well. However, the scalability, the democratization and the sophistication are really what make deepfakes different.
00:11:46:11 – 00:11:56:17
Erika
So interesting, the processing fluency. So it's really that the brain sees what it wants to see. Yeah.
00:11:56:17 – 00:12:07:04
Nina
Yeah, or it wants to believe what seems plausible, right? You saw it with your own eyes, you heard it with your own ears; ergo, it must be true.
00:12:07:23 – 00:12:14:21
Max
So I guess it's like those mirrors at high-end hotels that make you look so much better than you do in a normal mirror.
00:12:15:06 – 00:12:16:11
Erika
Yeah, those mirrors.
00:12:16:20 – 00:12:20:15
Max
Those are great mirrors. The original form of AI.
00:12:21:16 – 00:12:37:22
Erika
Right. It seems the rules that do exist in the U.S. are largely aimed at pornographic and political deepfakes and not at protecting everyone else. Are there any regulations to protect companies and consumers from becoming victims of deepfakes, both in the U.S. and globally?
00:12:39:19 – 00:13:23:02
Nina
Not that I know of. I think that when deepfakes first started emerging, the initial fear was that they would be used for political disinformation, right? And we've already seen that happen in the last few years, whether it's that deepfake video of President Zelensky of Ukraine surrendering and telling his soldiers to lay down their weapons, at a crucial point when Russia's invasion of Ukraine had just started, or the numerous deepfake videos of political figures in the U.S., from Donald Trump to Joe Biden. You name it, we see it in the political discourse.
00:13:23:11 – 00:13:54:01
Nina
And I think that is reflected in the policy and the regulation that we've seen. So various states in the United States have passed laws to do with electoral interference and deepfakes; I think California has, Texas is another, and New York might be another one. But the thinking on protecting consumers from having their identities hijacked in this way, or really their civil liberties attacked in this way, we just really haven't seen that yet.
00:13:54:01 – 00:14:35:00
Nina
We've seen something in the UK around non-consensual pornography through the use of AI, but that's been tied into revenge porn. And to me, the broader issue that this raises, the lack of policy or regulation, is that, first, we're just in this era of exponential, tech-led change. So even for lawmakers, or more broadly for society, to wrap their heads around what is happening and how quickly it's happening, and then to try to pass policies to regulate it, is super, super difficult, right? Because the pace of change is so quick.
00:14:35:00 – 00:14:59:16
Nina
Again, the advances in AI that led to deepfakes, and now generative AI, have all really started unfolding in the last five years. And second, when you take a step back, you see deepfakes as one of the first malicious use cases of what has now become the much broader field of generative AI, which isn't all malicious.
00:14:59:16 – 00:15:27:24
Nina
Not at all. It's something that's going to completely transform the economy and the future of all knowledge work. To try to put regulatory frameworks around something that's going to have such a profound impact on society, also in terms of huge potential economic abundance and a changing labor market, like I say, changing the frameworks of society, is a huge thing to bite off.
00:15:28:00 – 00:15:41:01
Nina
So yeah, we're starting to see a few things, but we haven't really seen that much around consumer rights protection. The bigger issue, I think, is just the scale and pace of change.
00:15:41:19 – 00:15:48:08
Erika
Right. Now, with the upcoming election in the United States, what are some deepfakes we should be worrying about?
00:15:50:00 – 00:16:12:00
Nina
So, love a US election. It's going to be very, very interesting. My God. I mean, I am a keen follower of your country's politics, and it's always fascinating to see what unfolds. The last few years have been very eventful, to say the least. Now, what you've seen in the past...
00:16:12:00 – 00:16:12:23
Max
That's a great term, by the way.
00:16:12:23 – 00:16:39:13
Nina
Eventful. Very eventful, colorful and loud. And you've seen how the entire information ecosystem around the election, and just more broadly, if you think about the polarizing trends in discourse in the United States, you've seen how online the discourse has become very polarized and there's been a lot of mis- and disinformation. This was true even before AI-generated content or deepfakes were in the game.
00:16:39:21 – 00:17:10:10
Nina
Right? But like I said, deepfakes, or AI-generated synthetic content, are just so much more powerful. So we've already started to see manipulated media of politicians saying and doing things, and we've actually already seen campaigns start to use AI-generated content. I think there was a campaign video by the RNC which depicted a dystopian future in which Biden won reelection, and what the US would look like.
00:17:10:10 – 00:17:52:03
Nina
And I think the Democrats will probably start using AI-generated content in their campaign messaging as well. So that's already fair game: you can now use AI-generated content depicting scenes, audio, and people saying and doing things that never happened, or imagining a future scenario, which is really fascinating. It's already fair game in campaign material. But also, given how polarized the state of debate is in the country, as we get closer and closer to Election Day, we're going to start to see more and more manipulated content.
00:17:52:03 – 00:18:38:21
Nina
And of course, manipulated by AI, it's going to be increasingly sophisticated. I think you will definitely see videos emerge where people are debating whether it's true or not. So you'll have people debunking those videos, saying, oh, well, that's just a deepfake, whilst others will be like, no, it's not, it's authentic. And this is actually one of the biggest issues with the proliferation of deepfakes, or more broadly AI-generated content: you lose your ability to discern what's authentic and what's not. Because it's not only that everything can be synthesized or created with AI; it's also that once you understand that AI can create any video of anyone saying or doing anything
00:18:38:21 – 00:18:59:19
Nina
or clone anyone's voice, how do you know that anything is real? So even if something is real, you might be more likely to say, yeah, well, I don't think that's real, I think that's a deepfake. So it's the corrosion of any kind of trust in the medium of digital content. But yeah, we can keep on chatting politically.
00:19:00:06 – 00:19:23:11
Max
Yeah. I mean, I've heard you mention in the past that phenomenon called the liar's dividend, where someone can get away with lying by just saying, hey, that's fake news. The media attempts to expose the lie, it backfires, and it only makes the lie sound even more credible. So the instigator of the lie subsequently becomes the beneficiary of its outcome.
00:19:24:10 – 00:19:49:20
Nina
Exactly. And that happened even before deepfakes, right? Because we know who coined, or who lobbied, the term fake news into international stardom, shall we say. And that individual was doing that long before deepfakes were even on the scene. But now, to say, hey, that's fake news, that's fake, that's not real, you have more plausibility, right?
00:19:49:20 – 00:19:52:03
Nina
You have more credibility to deny.
00:19:53:01 – 00:19:57:20
Erika
Is it fair to say it’s like that old folktale, the emperor has no clothes? Remember that?
00:19:57:23 – 00:20:06:09
Max
I actually thought the book was called The Emperor's New Clothes. And in that case, I guess the liar's dividend exhibited was that of the tailors that sold the Emperor his
00:20:06:09 – 00:20:15:05
Erika
new clothes, a lesson they taught many, many years ago. But regardless, I definitely can see how the liar's dividend has been used in politics. Sure.
00:20:16:08 – 00:20:23:02
Nina
Yeah. I think politics is just going to become an even dirtier game. So I wouldn't want to be a politician.
00:20:24:10 – 00:20:50:13
Max
Definitely not. Microsoft and Adobe are two companies now trying to authenticate media and train tech to recognize the inconsistencies that mark fake content. They're always in this race against the deepfake creators, though, who are constantly discovering new ways to evade these systems. So how do we create laws when the technology will have already evolved again before the law is even enforced?
00:20:51:07 – 00:21:11:02
Nina
Yeah, so I'm going to unpack that, because there are two parts to my answer. The first part, about the law: it's very, very difficult, right? Especially when you consider this unique juncture in history that we're at, where we have exponential technology, and this technology is coming faster than any technology ever known before.
00:21:11:02 – 00:21:38:13
Nina
We've already seen the pace of technological change over the past 30 years, since the advent and proliferation of the Internet, smartphone and cloud: how quickly everything has changed, from our personal lives to the labor market, to the economy, to the world. And I think AI is going to be many magnitudes more significant in terms of its impact, not least because the technology is more capable and is accelerating quicker.
00:21:39:00 – 00:22:04:13
Nina
And I would actually say it's going to be adopted even quicker. So if you're a lawmaker, it's very, very difficult to conceive of the right frameworks to make regulation for a world that's going to transform so quickly. So I guess you have to think about it from a principles viewpoint: mitigating risk while also tapping into opportunity.
00:22:04:13 – 00:22:41:08
Nina
But we could spend three hours talking about the right regulatory approach; it's going to be difficult. The second part is about authentication, and this is really interesting, because when I first started getting into the world of deepfakes, in the very, very early days, the community engaged around this initially thought, well, what we need to do is build a deepfake detector. We need to get AI to detect AI, so the technology can tell us when something is synthetic, when it's generated by AI.
00:22:41:15 – 00:23:10:14
Nina
In practice, that's very difficult to do, because it's always an adversarial race, right? Just when your detectors can detect some AI content, the generators of that AI content figure out how to beat the detectors, and so on. And there may even come a point where the generators are so sophisticated that the detectors can no longer figure out, in the DNA of something, whether it's generated by AI or not.
00:23:10:14 – 00:23:30:11
Nina
So it might just be a fool's errand. Secondly, you can never build a detector that's one size fits all, because there are so many different forms of AI-generated content, so many hundreds of thousands of different models. So if you think you can just build the one-size-fits-all AI content detector, well, it's just never going to work.
00:23:30:19 – 00:23:59:01
Nina
And finally, because of the way these detectors work, which is to give you a probability of how likely something is to have been generated by AI or not, there's always a chance of a false negative or a false positive. So yeah, we're 90% sure that that's an AI-generated video, but there's always that 10%. So rather than thinking of detection as the only approach, and I'm not saying that approach should be dropped.
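[Editor's aside: Nina's point that a detector can only ever return a probability, never a verdict, is worth making concrete. The sketch below is hypothetical; the clip names, scores and the 0.9 threshold are invented for illustration and do not come from any real detection product.]

```python
# Why a probabilistic deepfake detector always leaves room for error.
# All scores and clips below are made up for illustration.

def verdict(p_synthetic: float, threshold: float = 0.9) -> str:
    """Turn a detector's confidence score into a hard synthetic/authentic call."""
    return "synthetic" if p_synthetic >= threshold else "authentic"

# (clip name, detector's confidence the clip is AI-made, ground truth)
clips = [
    ("clip_a", 0.97, "synthetic"),  # correctly flagged
    ("clip_b", 0.82, "synthetic"),  # false negative: a fake slips under 0.9
    ("clip_c", 0.93, "authentic"),  # false positive: a real video gets flagged
    ("clip_d", 0.08, "authentic"),  # correctly passed
]

for name, score, truth in clips:
    call = verdict(score)
    note = "" if call == truth else "  <- detector is wrong here"
    print(f"{name}: {score:.0%} confident synthetic -> {call}{note}")

# Lowering the threshold catches clip_b but flags more real videos like
# clip_c; raising it does the reverse. No threshold eliminates both error
# types, which is the residual "10%" Nina describes.
```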
00:23:59:01 – 00:24:05:20
Nina
It's just one line of defense. A more strategic and fundamental way to think about it...
00:24:06:10 – 00:24:06:18
Erika
And?
00:24:07:17 – 00:24:52:18
Nina
It is to build transparency into the very architecture of the Internet itself. So you talked about authentication, and it's a really important question, because when deepfakes initially started emerging, among the early pioneers in the community who were looking at this form of AI-generated content and how it might impact information integrity, the potential existential fear was: once AI-generated content starts proliferating, how will we determine what's synthetic and what's authentic?
00:26:51:21 – 00:27:32:05
Nina
So a second approach, then, if detection isn't the silver bullet solution, although it will remain one form of resilience building, is this idea of content credentials, or authentication. This is the idea of embedding transparency into the core architecture of the Internet itself, so that anybody, any consumer, any organization, any actor online, can see the provenance of where content and information came from, and can make their own trust decision based on that contextual information.
00:27:32:13 – 00:27:55:15
Nina
So you're not in the game of saying this is true or this is not. Moreover, given that so much content is going to be synthetically generated or made by AI which is not malicious, not malicious deepfake content but legitimate, it is really important that we can demonstrate the origins of AI-made content.
00:27:55:15 – 00:28:21:15
Nina
So when you think about authentication, this is broadly the idea that at the point of creation, you seal something into the DNA of that content, whether it's made by AI or not. It's a cryptographic hash which shows how that piece of information or content was made. And this is much more than a watermark; a watermark can be removed or edited.
00:28:21:21 – 00:28:50:18
Nina
This is, like I say, in the DNA of that content. It's called secure capture, or secure signing. There are different ways to do it: you can do it with a cryptographic hash, or you can do it on a blockchain; I think the former is better than the latter. But it's not enough to just have this information about the context of the content in its DNA, because it might be there, but you need to be able to see it too, right?
00:28:50:18 – 00:29:28:15
Nina
You need people to be able to see that nutritional label wherever they encounter content online. So the next step is about developing an open standard to actually bake into the architecture of the Internet, the infrastructure, so people can see that nutritional label pop up when they need to. And that open standard is actually already being developed by a nonprofit organization called the C2PA, of which Adobe, Microsoft, Intel, Arm and Truepic, a company that I advise, which does the secure capture hashing technology, are all founding members.
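[Editor's aside: to make "sealing something into the DNA of the content" concrete, here is a minimal sketch of hash-and-sign provenance. It is not the actual C2PA manifest format, which is far richer; it only illustrates the mechanics Nina describes, and it assumes the third-party Python cryptography package for the Ed25519 signature. The device identifier and timestamp are hypothetical.]

```python
# Hash the media bytes, wrap the hash in a small provenance manifest, sign it.
# Illustrative only; the real C2PA spec defines its own manifest structure.
# Requires: pip install cryptography

import hashlib
import json

from cryptography.hazmat.primitives.asymmetric import ed25519

# In practice the signing key lives in a camera's secure hardware or with the
# AI service that generated the content; here we just generate one in memory.
signing_key = ed25519.Ed25519PrivateKey.generate()

media_bytes = b"...raw pixels or audio samples..."  # stand-in for a real file

manifest = {
    "sha256": hashlib.sha256(media_bytes).hexdigest(),
    "created_by": "example-capture-device-0001",  # hypothetical identifier
    "ai_generated": False,                        # or name the generating model
    "captured_at": "2023-06-20T12:00:00Z",
}
payload = json.dumps(manifest, sort_keys=True).encode()
signature = signing_key.sign(payload)

# Verification side: check the signature on the "nutritional label", then
# re-hash the media you actually received and compare.
public_key = signing_key.public_key()
public_key.verify(signature, payload)  # raises InvalidSignature if tampered
assert hashlib.sha256(media_bytes).hexdigest() == manifest["sha256"]
print("content credentials verified")
```

Unlike a visible watermark, if either the media bytes or the manifest are altered after signing, verification fails, which is what makes the credential tamper-evident rather than merely decorative.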
00:29:28:23 – 00:30:04:02
Nina
And what's really interesting is that some of the biggest generative AI companies have now committed to signing their AI-generated content and adhering to that open standard. Again, this is going to take some time, because it's about reimagining the very backbone of the architecture of the Internet ecosystem. But I think the way this is going to evolve is that there is going to be one version of the Internet where you can see content credentials, where you can see transparency about the information and the content that you trust.
00:30:04:02 – 00:30:21:13
Nina
Does that mean that everything online will have content credentials? No. But then you can make a trust decision: do you want to trust a video that has no content credentials, where you don't know where it came from? Arguably not, because if you're a good player, you would want to be transparent about the origins of your content.
00:30:22:08 – 00:30:34:18
Max
So I guess it's almost like a trusted, maybe Web3, type of browsing engine versus a Web 2.0 one, where you can make your own judgment call and take everything with a grain of salt.
00:30:35:09 – 00:30:59:14
Nina
Exactly. And it's essentially about giving people the tools to be critical without becoming cynical, because once you tip into cynicism and you don't trust that anything is real anymore, well, that's pretty bad, both from a political perspective and from an economic perspective. And, you know, you're all business people; you need to have access to trusted information.
00:30:59:14 – 00:31:09:24
Nina
And trusted content; you can't make decisions otherwise. We talk about this in a political context, but it's just as important in a personal context and in the enterprise and business context, too.
00:31:10:23 – 00:31:24:01
Max
Right. So with the ongoing heated discussions on AI's broad use without technological or legal safeguards, what is your outlook on the future of humanity in the era of AI?
00:31:25:11 – 00:32:02:21
Nina
So today we've done a bit of a deep dive on deepfakes, right? Which is one form of generative AI, and a form which is usually used for malicious purposes. But more broadly, when you think about AI, it's been getting a little bit of a bad rap in the press these days, because the media narrative that has been dominant almost every single day has been the so-called AGI scenario: the scenario where AI takes over and there's an existential or extinction risk because we humans have lost control of the AI.
00:32:03:05 – 00:32:28:17
Nina
And that narrative has just been played over and over and over for the past six months. Now, I have to say that this is a speculative scenario which is hypothetically possible, just like it’s hypothetically possible that, you know, an asteroid will crash into Earth and there’ll be a big extinction event that way. But it would be remiss of me and it would be irresponsible of me not to discuss the other side.
00:32:28:23 – 00:33:02:11
Nina
Yeah. So we talked about deepfakes, which is the malicious use of AI, and we've talked about the extinction risk, which you see in the media every day. But AI is a technology that has the potential to uplift humanity in ways that we just have not been able to conceive of before. One, because, again, if you think about generative AI, people don't associate ChatGPT and deepfakes in the same sentence, but they really are manifestations of that same type of AI, right?
00:33:02:12 – 00:33:48:00
Nina
AI that can create, AI that can generate. But generative AI is going to transform all knowledge work forever, all human creative and intelligent work; we can think of generative AI as a kind of copilot to assist human endeavor in this respect. So there's going to be tremendous abundance created by artificial intelligence. There was a report that came out from McKinsey just last week that looked at potential productivity gains through generative AI in 63 different areas, and it quantified that this could add up to $4.2 trillion to the economy annually. Okay, just per annum.
00:33:48:00 – 00:34:30:09
Nina
So there's going to be a tremendous amount of abundance, productivity and money to be had through the enterprise applications of AI. But it's not only that. If you start thinking about it as a medium for knowledge and research, there are very fascinating and interesting things happening, for instance in biotech, where generative models are now being used to discover new drugs and create new medicines for things like cancer, because of the way they can expedite the process of drug discovery: something that would have taken human researchers decades to do is now being expedited down to hours or days.
00:34:30:23 – 00:35:01:01
Nina
So I'm really excited about the possibilities for cures for illnesses, really excited about the economic abundance, really excited about the potential for AI to combat aspects of climate change. Again, one of the most promising things I've seen is the development, through the use of AI models, of an enzyme that basically eats plastics. So imagine what that could do for the plastic waste you see in our oceans.
00:35:02:02 – 00:35:30:06
Nina
And ultimately for me, when you talk about this technology's ability to change humanity, the big point is this: we're at this incredible juncture in our lives where the pace of exponential technology is going to be far more profound than anything we've experienced before. So we will experience more tech-led change than the entirety of humanity that came before us.
00:35:30:21 – 00:35:56:03
Nina
Now, the top line is transformation, transition and change, and that can be scary. And there's no doubt that technology as powerful as this will be weaponized and used maliciously. But to me, that is just a story that's as old as humanity; ever was it thus. Of course, the tools keep on becoming more powerful, but I don't think we're at the point where we've lost our agency.
00:35:56:03 – 00:36:21:13
Nina
So this narrative that the AI is taking over, and we've absolutely lost our agency and have no say, I want to push back really strongly on that, because we do have a say. And we have this unique moment in history, I would say the next decade, to lay down the foundations and decide how this technology is going to be integrated into the framework of societies.
00:36:22:06 – 00:36:22:23
Max
So there’s hope.
00:36:24:00 – 00:36:26:07
Nina
There's hope. I am the optimist.
00:36:26:07 – 00:36:28:07
Erika
Yes, that's great.
00:36:29:23 – 00:36:55:04
Nina
You know, I think rather than being concerned about the AI that's going to kill us, the more important question is: who controls these systems, to what ends, and how are they deploying them? Because, again, ultimately this is a story about humanity, right? This isn't a story of us losing our agency because some autonomous computer now has all the control.
00:36:55:10 – 00:37:34:20
Nina
It's still people, organizations, well-resourced actors and nation states who have the control. So for me, the more interesting story, and perhaps the more pivotal point as to how this ends up going, will be: how do we ensure accountability? How do we ensure transparency? How do we ensure safe frameworks for ethical, fair, nondiscriminatory use? Those are all the age-old questions that we've been asking ourselves in society anyway: truth to power, accountability, and making sure that the resources and the abundance are not monopolized by a few well-resourced players.
00:37:36:10 – 00:37:37:05
Max
Well said.
00:37:37:20 – 00:38:00:00
Erika
Yeah, very interesting. Well, that brings us to our game time. Oh, yeah. So for this game, we wanted to play This or That, for our audience to get to know you a little better. I will give you two options and you say your first choice. Are you ready? I'm ready. Let's go. Text or call?
00:38:01:05 – 00:38:09:12
Nina
Oh, text. It just offends me when people call me without texting me. Only my parents are allowed to do that. And my husband.
00:38:09:20 – 00:38:11:07
Max
And then they leave you voicemails.
00:38:11:09 – 00:38:11:24
Erika
Yeah. It’s like.
00:38:11:24 – 00:38:18:19
Nina
Oh, my God, my dad loves leaving me seven-minute voicemails. I just told him today, I don't listen to them. Just text.
00:38:19:10 – 00:38:23:06
Erika
We all must have these conversations with our parents. Yeah.
00:38:27:02 – 00:38:33:16
Erika
Outspoken or diplomatic? Mm.
00:38:33:22 – 00:38:49:14
Nina
I think there's a moment for each, right? I love a little bit of diplomatic maneuvering, very sophisticated. But sometimes you've just got to say it, right? So I think there's a moment for each approach.
00:38:49:14 – 00:38:51:22
Erika
Yeah. Twitter or Facebook?
00:38:53:16 – 00:39:05:13
Nina
Neither. Okay, I quit Facebook. Yeah, I quit Facebook a long time ago. And Twitter is just a dead zone these days. So I’m still waiting for the rejuvenation. I’m waiting for it to be great again.
00:39:06:13 – 00:39:16:03
Erika
M3GAN or Matrix? M3GAN, that movie that just came out, the AI nanny that was like a little girl's protector.
00:39:16:04 – 00:39:21:08
Max
Yeah, like a companion. That’s pretty scary, actually. I saw the trailer. It’s kind of creepy.
00:39:21:20 – 00:39:38:15
Nina
Okay, sounds rubbish. And it sounds like another kind of AI doomsday thing. Okay, I haven't watched M3GAN, I haven't seen the trailer, so I'm going to go for the Matrix, because that is just one of the all-time greats. And is it from the nineties? Is it from that era? Yeah.
00:39:39:01 – 00:39:41:16
Max
I think the first one was probably like late nineties maybe.
00:39:42:00 – 00:39:45:10
Nina
Yeah, you know, one of the best decades. So I'm going to go for the Matrix.
00:39:46:06 – 00:39:48:03
Erika
Always early or right on time?
00:39:50:13 – 00:40:06:03
Nina
Oh, obviously now that I have a very busy speaking schedule, I have to always be early, but I think my natural proclivity would almost tend towards a little bit late. So I have some Asian genes in me.
00:40:06:23 – 00:40:07:02
Erika
Where?
00:40:07:02 – 00:40:12:11
Nina
Nepalese, but the other half is German, and we're extremely strict about our timekeeping. So I guess right on time, then.
00:40:12:24 – 00:40:15:17
Erika
Okay. Yoga retreat or music festival?
00:40:16:04 – 00:40:19:08
Nina
Oh, yoga retreat, definitely no doubt.
00:40:19:17 – 00:40:21:15
Erika
Fresh fruit or fresh flowers?
00:40:23:08 – 00:40:31:22
Nina
Oh, tricky. Both, both. We have a lot of fresh fruit in the house because I have young kids who just eat so much.
00:40:32:07 – 00:40:33:22
Erika
Yeah, I’m like, I want some.
00:40:33:22 – 00:40:39:08
Nina
Blueberries, too. But there are never any left. But love the fresh flowers, yes.
00:40:40:14 – 00:40:41:16
Erika
Mac or PC?
00:40:42:22 – 00:40:51:18
Nina
Mac, no doubt. Although it's interesting how Microsoft has suddenly become the sexy tech company, and that has a lot to do with their investments
00:40:51:18 – 00:40:56:02
Max
In AI, yeah. I was going to say, once they invested in ChatGPT, so, sure.
00:40:56:19 – 00:41:04:00
Erika
Yeah. TED Talk or MasterClass? TED Talk, still TED Talk.
00:41:04:14 – 00:41:22:20
Nina
They're free and accessible. For MasterClass, you have to have a subscription. And I have to say that when I subscribed, there were amazing masterclasses, but I was like, I just want more; there wasn't as much content as I hoped. Maybe that's changed recently. Nothing beats the TED Talk.
00:41:22:20 – 00:41:25:12
Erika
Aisle seat or window seat?
00:41:27:09 – 00:41:32:19
Nina
Aisle, just because if you have to go to the bathroom, I hate to be like...
00:41:33:09 – 00:41:36:21
Erika
Or excuse me we are going to yeah no.
00:41:36:24 – 00:41:38:24
Nina
Definitely, just for that reason. Yeah.
00:41:39:15 – 00:41:46:15
Erika
Cat or dog? Dog. Cats are just moody, aren't they?
00:41:46:15 – 00:42:09:05
Nina
If I have to have a pet, which, you know, I love animals, I grew up with pets, I grew up with dogs actually, and a few cats. But, you know, it's a commitment. So I would like my pets to be appreciative and making me feel good, like they want to spend time with me, rather than a sassy cat who just wants to leave.
00:42:09:06 – 00:42:09:13
Erika
Or just.
00:42:09:18 – 00:42:12:08
Nina
Go away, human. Yeah, definitely a dog.
00:42:12:08 – 00:42:13:18
Erika
No attitude is nice.
00:42:13:19 – 00:42:17:16
Nina
Yeah, exactly. Although you don't have to take cats for walks, I guess.
00:42:19:07 – 00:42:25:08
Erika
Yeah. Rock and roll or German techno? German techno.
00:42:26:12 – 00:42:31:02
Max
What is German techno and how is that different from regular techno or is all techno German?
00:42:32:20 – 00:42:34:08
Erika
Oh, German.
00:42:34:08 – 00:42:56:00
Nina
German techno. Well, Germany is just like a pioneer in techno, and if you like that kind of music, you definitely want to be going clubbing in Germany. Yeah. It's, I guess, like Detroit in the US would be a similar level of house and techno. Yeah, German techno.
00:42:56:11 – 00:43:06:15
Erika
You just, like, listen to Barry Manilow. Not that... but we listen to some good... There's nothing wrong with Barry Manilow. Is that not good music?
00:43:06:15 – 00:43:08:11
Nina
You've been listening to Barry Manilow?
00:43:08:19 – 00:43:09:03
Erika
No.
00:43:09:15 – 00:43:10:10
Max
She’s just about.
00:43:10:22 – 00:43:11:02
Nina
Just.
00:43:11:02 – 00:43:20:24
Erika
See them all the time. But yeah, it keeps our marriage going. It’s super. Hearing or supervision. Supervision.
00:43:22:13 – 00:43:30:09
Nina
Super vision. Wait, that one you need to clarify. So what could you do with super hearing? You could just, like, listen to any conversation in the world?
00:43:30:18 – 00:43:31:20
Erika
Yeah pretty much.
00:43:33:01 – 00:43:35:22
Nina
No, super vision gives you... you can see...
00:43:36:09 – 00:43:38:16
Erika
See anything. Yeah.
00:43:40:08 – 00:43:45:01
Nina
And if you have super vision, can you also hear, I think? Or do you just kind of... yeah.
00:43:45:01 – 00:43:50:21
Max
But, you know, I think you can hear. Yeah, but not super; you can't have super hearing then.
00:43:50:21 – 00:44:01:08
Nina
It has to be super vision, because you can still hear, but you can also see, and that can help you infer perhaps what people are saying. Although it might all be...
00:44:02:02 – 00:44:03:10
Max
May all be fake content.
00:44:03:10 – 00:44:06:00
Nina
Yeah, exactly. It'll just be deepfake content.
00:44:07:06 – 00:44:13:20
Erika
Concert or movie? Movie. Dine in or takeout?
00:44:14:16 – 00:44:18:04
Nina
Dine in. My husband's a very good cook, so if he's cooking, in.
00:44:18:16 – 00:44:25:20
Erika
Nice. Dress up or dress down? Dress up, dress up. Although, you know, after COVID, I was like...
00:44:25:20 – 00:44:29:01
Nina
Oh, I just want to live in leisurewear.
00:44:29:04 – 00:44:32:08
Erika
Like, you know, like Sweaty Betty.
00:44:32:08 – 00:44:34:00
Nina
Leggings for the rest of my life.
00:44:35:12 – 00:44:36:09
Erika
Uber or Lyft?
00:44:38:09 – 00:44:48:04
Nina
Uber, just because we don't really have Lyft here in the UK, or maybe we do, but Uber is the big thing here, although it seems to be declining in quality.
00:44:50:07 – 00:44:51:11
Erika
Pearls or diamonds?
00:44:53:23 – 00:44:59:13
Nina
Uh, maybe one good diamond, and otherwise pearls.
00:45:00:20 – 00:45:02:10
Erika
Walk or run?
00:45:03:18 – 00:45:07:23
Nina
Walk, definitely. Not into running, no, no.
00:45:10:00 – 00:45:10:23
Erika
Plan it or wing it?
00:45:13:12 – 00:45:29:05
Nina
Oh, a good time for each, I guess, because sometimes if you plan it, it's just too, you know, manicured. Sometimes, when there's a little bit of room for just winging it, great things can happen. But of course, great planning can lead to great results.
00:45:30:12 – 00:45:32:04
Erika
Zoom call or in office?
00:45:32:15 – 00:45:43:14
Nina
I have to say I do love the Zoom call for helping me connect to people all around the world. But there is something about just meeting people in person, right? It’s so much better.
00:45:44:24 – 00:45:45:23
Erika
Singing or dancing?
00:45:47:09 – 00:45:49:08
Nina
Dancing to the German techno.
00:45:49:20 – 00:46:00:13
Erika
And of course, yeah, we have to check that out. A playlist or a podcast? A playlist.
00:46:01:02 – 00:46:06:09
Nina
I've been listening to some good nineties playlists recently. The decade of my youth.
00:46:07:02 – 00:46:09:18
Erika
Lose your keys or lose your phone?
00:46:11:16 – 00:46:22:17
Nina
I'm doing both all the time, actually. I've been pretty good about not losing my phone for the last few years. But keys... yeah, that's a low point for me.
00:46:23:24 – 00:46:50:12
Erika
Sunrise or sunset? Sunset. Know-it-all or have-it-all? Have it all. Nobody likes a know-it-all. But be humble and have it all, right? Fame and fortune or love and wisdom? Love and wisdom, any day. Oh, this was fun. Thank you for talking to us today.
00:46:51:00 – 00:46:52:20
Nina
Thank you so much. Thank you for having me.
00:46:53:03 – 00:47:10:23
Max
Yeah, thank you. If you want to learn more about generative AI, please pick up Nina Schick's book, Deepfakes. And if you want to learn more about how AI is being implemented in business, or about any other new tech, visit Mettel.net or contact your MetTel sales representative.