Episode 38
Exploring the Convergence of AI and Human Ingenuity with Matthew Blakemore
Matthew Blakemore, an AI strategist and entrepreneur, shares his journey from the fashion industry to the world of artificial intelligence. He discusses the challenges of protecting intellectual property and the need for more support for startups in Europe. Matthew also highlights the importance of aligning AI with business strategy and the potential business applications of AI in various industries. The conversation covers various AI-related topics, including the value of business plans, the future of AGI, the limitations of AI and human intelligence, bias in AI, interacting with AI, and upcoming speaking engagements.
Takeaways
- Startups face challenges in protecting their intellectual property, and more support is needed regarding cost and legal barriers.
- The UK startup scene has been impacted by Brexit, with some companies moving to other European cities to access the single market.
- Businesses should carefully consider their AI strategy and choose the right form of AI that aligns with their business goals.
- AI has various applications in different industries, and businesses can benefit from productivity gains and improved customer experiences.
- The implementation of AI should be responsible and ethical, considering issues such as bias and data privacy.
- Business plans are valuable for the thinking process they require, not just the document itself.
- The future of AGI may involve a federated model with specialised AI systems working together.
- Human intelligence still surpasses AI in many areas, and there will always be things humans can do that computers can't.
- Bias in AI is a complex issue, and companies should address biases in their own practices rather than blaming AI models.
- Interacting with AI in a polite manner may not always yield the best responses, raising ethical concerns.
Links relevant to this episode:
Thanks for listening, and stay curious!
//david
--
Tools we use and recommend:
Riverside FM - Our remote recording platform
Music Radio Creative - Our voiceover and audio engineering partner
Transcript
00:00 - Voiceover (Announcement)
The Creatives with AI Podcast, the spiritual home of creatives curious about AI and its role in their future.
00:08 - David Brown (Host)
Hello everybody, welcome to the Creatives with AI Podcast. I'm your host, David, and on today's show we have an excellent guest, Matthew Blakemore. Matthew is an award-winning tech visionary, entrepreneur and strategist in artificial intelligence. He boasts over a decade of experience in propelling products from ideation to delivery and commercialisation. He is celebrated for his thought leadership and dynamic public speaking, and he continuously sheds light on the expansive realm of AI and digital transformation. When I was researching Matthew, I could probably talk for another six minutes about all the stuff that he's accomplished and the things that he's done and the places that he's worked, but I think it's best if we just get straight into the conversation and I'll let Matthew give a little bit of background on himself. So, Matthew, welcome to the podcast.
00:57 - Matthew Blakemore (Guest)
Well, thank you so much for inviting me on to your podcast and thanks for the great introduction as well. I really appreciate that.
01:03 - David Brown (Host)
I literally don't have time to read everything that's on your LinkedIn. That would be the whole show.
01:09 - Matthew Blakemore (Guest)
Does anyone have enough time to go through all that? Yeah, no... you know, I'm really interested to kind of have this discussion with you today, and I've heard amazing things about you and your podcast as well, so I'm really excited to be part of it.
01:24 - David Brown (Host)
Thank you very much. I must send whoever that is a check. So I think I got introduced to you. I mean, actually we've been running in very similar circles in the London startup sort of ecosystem I guess you could call it for quite a few years actually, which we only just realized the other day when we did the intro call. So that was quite interesting.
01:47
But I was introduced to you specifically, I think, through Wynn, who works for the government on the BridgeAI program, and yeah, it just seems like you're an ideal person to talk to about some of this stuff. You might be able to hear all this stuff downstairs, sorry about the background noise, I'm at home and there are people about. But yeah, maybe if you could just start quickly and give some highlights of the things that you think are most relevant in your background, and then we'll just sort of go from there.
02:17 - Matthew Blakemore (Guest)
I set up my first company back in...
03:24
We won University of Hertfordshire's Flare Award, which is their annual business award. I'm actually helping judge this year, so I'm quite excited to be going back and seeing all the exciting companies. And then we went through the London Accelerator Academy, which was brilliant, met so many great investors and advisors as well. It's run by Ian Merricks, and we actually had a mentor, Damon Bonser, who's the CEO of the British Design Fund, and his wisdom and support were fantastic, and he managed to get us meetings and support from Arcadia Group, House of Fraser and other retailers. So, you know, really enjoyed that experience.
04:02
I guess the thing that did it for us in the end on that project was Brexit, sadly. Not because it directly impacted us, but because it directly impacted our customers, the retailers. Because of the uncertainty, the fashion retailers were already struggling a little bit, and then the uncertainty caused by that vote meant that a lot of them unfortunately went bankrupt quite soon afterwards. And so I carried on pushing it, as any entrepreneur would, and trying to drive it forward, but it seemed that every deal we were doing was with a retailer that eventually went bust. So it was pretty tricky times.
04:38 - David Brown (Host)
Which is quite interesting that you say that, because I don't think a lot of people realise the impact that that really did have. Obviously there was a huge controversy over whether it should have been done and all of that at the time, but I think what's happened in the background is a lot of businesses have struggled significantly with it. You know, I even know people who... even small business people. There's a guy I had on a couple of weeks ago who runs a small reggae sort of radio station and was a DJ, and he basically lost almost all of his business because it came from mainland Europe, and as soon as Brexit hit and the rules started to change, he couldn't afford to do business there anymore, and no one would buy from him, because he had to up the prices and everything else with all the customs rules and everything.
05:26
So, yeah, it's really interesting that you found that as well, and I think there's a lot of... I don't know that we'll really know the full extent of it for another 10 or 15 years, until we get some MBA students who go in and do some really good research around it and something comes out. But yeah, yeah, it sucks.
05:47 - Matthew Blakemore (Guest)
Absolutely. I mean, it's been tricky for a lot of businesses. We were, I guess, in a more advantageous position than some, because we weren't trying to trade overseas at that point. You know, we were just really working with UK retailers. But certainly I know several startups that were selling products or SaaS products abroad that found the increased barriers and uncertainty tricky for their business as well. So yeah, it created a really uncertain environment, as we know, and that kind of went on for quite some time, and yeah, it did negatively impact the startup scene, definitely.
06:30 - David Brown (Host)
Sorry, I was on mute. So how did you move from there? How did you get into... because I know you've been doing more stuff around AI, and I think you've got some patents and stuff around some AI technology. So how did you get from fashion to AI?
06:47 - Matthew Blakemore (Guest)
Sure. So with That Looks Good On Me.
06:50
We actually worked with Tom Mason, who's now the CTO of Stability AI, and he was effectively our CTO at that point, and we worked with him to develop an algorithm off the back of CloudSight's visual recognition technology, which was to identify the clothing items people were trying on in store and actually be able to suggest other items in store from the store's inventory, so giving them the more connected experience that you get online when you're looking at clothes, for example.
07:22
And so building on that, you know, understanding how to develop an algorithm, understanding the work that goes into visual recognition technology, and the partnerships we built on the That Looks Good On Me project really kind of set in motion my interest in artificial intelligence.
07:43
And you know, we had really interesting meetings with Amazon at the time about incorporating the work that we were doing into the Echo devices, so putting a camera in them. Now, I don't know if Amazon were already working on it, we had an NDA in place, but several months after our meeting they launched a product called Echo Look, which was to provide fashion advice using a camera in the Echo device. So I'm not sure, and I can't possibly say, if they copied our algorithm directly, but what I will say is, months prior to that, we actually shared with them our proprietary algorithm under an NDA to say, look, this is what we're doing, we think you could work with us to do this within the Echo. So I'll leave it to listeners to decide what happened there.
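For readers who want to picture the kind of pipeline Matthew describes, here is a minimal, hypothetical sketch, not the actual That Looks Good On Me algorithm: a visual classifier labels the item being tried on, and those labels are matched against the store's own inventory to suggest complementary pieces. Every function, field and pairing rule below is an illustrative assumption.

```python
from dataclasses import dataclass

@dataclass
class InventoryItem:
    sku: str
    category: str   # e.g. "jeans", "blazer"
    colour: str
    in_stock: bool

# Hypothetical pairing rules: which categories complement each other.
COMPLEMENTS = {
    "jeans": ["blazer", "knitwear", "trainers"],
    "dress": ["jacket", "heels"],
}

def recognise_item(image_bytes: bytes) -> dict:
    """Stand-in for a visual recognition call; a real system would use
    an image-tagging model or hosted API here."""
    return {"category": "jeans", "colour": "indigo"}  # stubbed prediction

def suggest_items(image_bytes: bytes, inventory: list[InventoryItem]) -> list[InventoryItem]:
    """Suggest in-stock items from this store that pair with what the
    shopper is trying on."""
    detected = recognise_item(image_bytes)
    wanted = COMPLEMENTS.get(detected["category"], [])
    return [item for item in inventory
            if item.category in wanted and item.in_stock]

shop = [InventoryItem("SKU1", "blazer", "navy", True),
        InventoryItem("SKU2", "heels", "black", True)]
print([i.sku for i in suggest_items(b"<camera frame>", shop)])  # -> ['SKU1']
```

The point of the sketch is only that the "intelligence" sits in the classifier; the recommendation step is ordinary inventory filtering.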
08:35 - David Brown (Host)
It's tricky, isn't it? It's really tricky, because... well, the companies shall remain nameless, but I have worked for two separate startups in the past that joined the Microsoft partner program and wanted to partner with Microsoft, because the technology would fit very well into other tools that Microsoft had.
08:58 - Matthew Blakemore (Guest)
Yes.
08:59 - David Brown (Host)
Yeah, part of the agreement that Microsoft does with companies like that is, before they'll integrate any of your code or do anything with you, you have to put the code in escrow, because they have to check it for security. That's their position, that they have to really investigate it in detail, because they need to make sure that you're not putting any backdoors in and all that sort of stuff.
09:19 - Matthew Blakemore (Guest)
Yeah.
09:20 - David Brown (Host)
And then, miraculously, six months later, Microsoft rolls out a product that looks identical to the one that you've been trying to pitch to them. Now I'm not saying for sure that Microsoft are stealing people's IP, but when it happens more than one time, it starts to feel a bit suspicious. So I can empathize with you, because I've had the same thing happen with big companies. And I know we're talking a lot about startups, people, but we will get around to AI eventually. That's one of the things they say today: everybody's like, don't send me an NDA to sign, because I'm not going to sign it, because who cares, it doesn't matter, no one's going to copy it. But actually, when you do go to some of the larger companies, I think if you do have a really good idea and something that they genuinely think they can implement, you do have to be careful, particularly if it's new IP and something that's maybe not tremendously difficult to do. Yeah, yeah.
10:18
They can just go, yeah, but we've been working on a project like that for ages. And, like, what are you going to do?
10:25 - Matthew Blakemore (Guest)
I mean, I guess the analogy that I would use when you're running a startup is you're a little bit like an antelope being chased by a group of lions.
10:34
And it's not just big tech. It's also, you know... I don't know if they're still around, but when we were working on our project there was also a company called Rocket Internet that had a pretty dodgy reputation for effectively taking ideas that were working well in some countries and then rolling them out, with a lot of money behind them, in other territories before the original company could actually expand there, which I would consider unethical. But, you know, it was a business model that seemed to work very well. There were a lot of things like that where it was very tricky, because I remember at the time as well, there was a book that came out from a tech leader, and I can't remember his name for the life of me, but it was effectively arguing that companies like Apple, I think it used Apple as an example, were so vast and so large that you could give them a brilliant idea and they wouldn't be able to implement it. Well, I think that's not true.
11:33
It's really not true, but that was actually used in a lot of presentations I went to. When investors were speaking, they were saying, you know, startups are too worried about their own IP. And actually, I think what we've seen... you know, there have been a lot of court cases in the US about a particular investment fund, the Alexa Fund, run by an arm of Amazon.
11:54 - David Brown (Host)
Right, okay.
11:56 - Matthew Blakemore (Guest)
Where they do actually have evidence that IP was stolen through that fund. So there's things like that where you do have to be extremely careful, because big tech both offer opportunities to startups by providing them with cloud services and things which they couldn't possibly develop on their own, but at the same time, if you have a fantastic product that they think can earn money, then, unless they acquire you, there's a possibility that you will find yourself on the end of them taking your idea and implementing it themselves.
12:34 - David Brown (Host)
And I suspect that this is only going to get worse, because I know there are some AI platforms out there today where you can say, you know, write me a UI that looks like Twitter, and it will literally go and create all the code for the UI to look exactly like another product, and it's lowering the bar. You know, you used to have to have a lot of skill and a lot of knowledge and time and resources. I mean, okay, if you were a software engineer, you could sort of sit in your bedroom at night and work on it as a side project if you wanted to, and you could eventually build something, but it took a long time, and you had to work those extra hours, and you had to have a lot of specialist knowledge. Whereas today, with all of the AI tools that are out there, you don't need that anymore. No, no.
13:28 - Matthew Blakemore (Guest)
No, you don't. It's true, and as these generative AI products improve over time, it's going to become easier for others. I mean, it's quite interesting, actually, while we're talking about this particular topic of generative AI and also copying other people's ideas. I don't know if you saw one of the main issues that's occurred with the new OpenAI GPT Store, where, effectively, someone has launched a GPT product that they spent a long time creating, and then people have just simply copied it and launched it under exactly the same name or a different name and claimed it as their own. And so you've got the store now with multiple versions of exactly the same tool, and you're completely unsure who actually came up with it in the first place. So there are absolutely IP challenges in the industry, and as these tools get more and more advanced, as you say, and the barrier to entry becomes smaller and smaller, it's going to result in a lot of copying, and the original innovators potentially losing out.
14:29 - David Brown (Host)
Yeah, I was at a meeting the other day with a law firm where we were talking about copyright, and that was one of the examples that came up. So not only is it about the models being trained and all of that, but then it's if you develop a tool like one of those specialist tools that's in the store, and then just the blatant copying of it. And my question became, well, is it just basically a fancy prompt that somebody set up? It kind of has a background prompt, and then you can ask it a question, but then it filters the results in a certain way, and I'm like, there's almost no way you can stop that. I guess in theory you could copyright it, which is kind of how software goes, right? They copyright the code itself as text, and then that's how a lot of software is protected.
15:20
Again, we're getting way into the weeds for people. But that's okay, I can learn something. And I guess that's the only protection maybe that you could have, under copyright, that somebody's used your copyrighted prompt to then, you know, develop a tool that works. I don't know, man, it's complicated, isn't it? I don't think we have the laws to cover that, really.
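To illustrate why these store tools are so easy to clone, here is a hedged sketch of the pattern David is describing: much of the "product" is a hidden system prompt wrapped around a generic model call. `call_llm` is a stand-in for whichever chat-completion API a builder uses, and the prompt text is invented; nothing here is OpenAI's actual implementation.

```python
# A "custom GPT" often reduces to a hidden instruction prefix plus a little
# post-processing. If the prefix leaks or is guessed, the tool is trivially copied.

HIDDEN_SYSTEM_PROMPT = (
    "You are a contract-review assistant. Answer only about UK contract law, "
    "cite the relevant clause, and format the answer as bullet points."
)

def call_llm(messages: list[dict]) -> str:
    """Stand-in for a chat-completion API call (assumed, not a real SDK)."""
    return "...model response..."

def custom_tool(user_question: str) -> str:
    messages = [
        {"role": "system", "content": HIDDEN_SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]
    answer = call_llm(messages)
    # The "filtering of results" David mentions: trim and reshape the output.
    return answer.strip()

print(custom_tool("Can my supplier change prices mid-contract?"))
```

Since the only asset is a short piece of text plus formatting rules, copying it requires no engineering effort at all, which is exactly the IP problem being discussed.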
15:45 - Matthew Blakemore (Guest)
I'm not sure we do yet.
15:47
I mean, also, traditionally it's been very, very hard for startups to protect their inventions, and it's something that no one really wants to talk about. But there are significant barriers, from a trademark perspective and a patent perspective and a copyright perspective, for startup companies. And the issues are, you know, cost. For example, if you wanted to patent something, even in just the UK, if you want to get a patent lawyer on that, that's thousands of pounds. Equally, in the UK, until very recently you really stood no chance whatsoever of patenting anything that was software related. They've actually just changed the law in the UK in the last month or so, so that they are more in line with the US, and they do now allow AI solutions, for example, to be patented. So they've kind of relaxed their view on software and AI in that respect. So you can now at least file patents, but then there's still the cost implication of doing so, which startups will struggle with, and that puts them at a disadvantage massively.
17:00
And then you've got issues, as I say, like trademarks. So if you filed a trademark in the UK, you used to get covered across the whole of the EU. You don't anymore, because of Brexit. So you file a trademark in the UK, you only get the UK, and then it's thousands of pounds to get Europe, and then you obviously need to do the US as well, which is more money, and startups don't have that sort of funding. And it was interesting that this week the EU announced a massive pot of funding to support their SMEs and their startups, and actually they already offer startups support in securing trademarks, like massive discounts and things, if you're based in the European Union. So people in the UK trying to set up companies are at a distinct disadvantage at the moment, and there doesn't seem to be that realisation, because we've got a huge market on our doorstep, but we're competing with companies in that market already, and we need more support.
18:01
I think.
18:02 - David Brown (Host)
Well, just to build on that, I totally agree with you, and I think the other challenge that we have in Europe, sort of in all of Europe, that we've had for a long time, is also the attitude towards investment in innovation from the start, compared to the US.
18:20
In the US, it's much easier. If you just have an idea and a deck, you can just go out and pitch the idea. I say easy, but it's much easier. You could walk in and say, look, I need three to ten million dollars to get this idea off the ground. It's going to take three years to build it, here's the plan. And they say, well, what have you got? And you go, I've got nothing, this is just the idea. We've done the research, but we need the money to get started. And you could get that funded in the US, but in Europe that would never happen.
18:49
Like, in Europe you have to build it. You might get some 150K sort of SEIS money, maybe a little bit of extra EIS money to sort of help you along, but you get basically nothing when you start off, until you've built up a customer base, you have a working prototype, you've got an MVP and people are starting to use it, and you've almost proven product-market fit. And then that's when the millions come, and it's a totally different ballgame. And, you know, if you're trying to compete with someone from the US who has a couple of million pounds to start off with, and can afford to register those patents and those trademarks and everything, it makes it really difficult. It really does, yeah.
19:35 - Matthew Blakemore (Guest)
I mean, it's no coincidence that a lot of the companies in the UK that have reached unicorn status started off with very wealthy founders in the first place, because if they've got money to put in, then it gives them an opportunity to grow. And equally, actually, I know some fantastic startups, not just in the UK but in Europe as well, that go to the US, to Silicon Valley, to raise their funding.
20:00 - David Brown (Host)
Yeah, yeah, I mean, that's the other thing is, you know, get your very early start here, get your shit together, and then go to the US and, you know, join one of the big accelerator programs there, or something like that, and really try and get your start.
20:18
What was the other thing I had? Another thing I wanted to bring up. Oh, the other thing that was interesting is, so I go to these big shows like Mobile World Congress, and there's also a Smart City World Congress that happens every year. So Smart City was in November, and Mobile World Congress is in February, March time. And before Brexit, I remember those shows were huge, and I used to go to them and there would be tons and tons of UK companies there.
20:50
You know, the UK would have a big pavilion and there'd be all sorts of companies represented there. And I've done that with a couple of different companies and been to those shows, and you could walk around and you could talk to the different European companies, and you could say, oh, I'm from the UK, and they'd be really engaged, you'd have a conversation. So about a year after COVID, I guess it was, really, so Brexit had really had time to sort of get entrenched in business, I remember going to the first show and walking around, and first of all, there was maybe one UK company there. And second of all, every place that I went to and tried to talk to them and said, hey, I'm from the UK, blah, blah, blah, they'd go...
21:35
Yeah, we don't do business in the UK anymore. And it was like, really? Because we think it's this big market, and we're the fifth biggest economy or whatever it is, and they're like, yeah, it's not worth the time and the money. Compared to the rest of Europe it's such a small market, and now it's so much trouble to deal with that we closed our office and we don't even deal there anymore. We have like one commercial person on the ground who works from their house, but that's it. And I was shocked, actually, at how many companies had said that, big companies that I'd sort of worked with in the past, and yeah, people just sort of bailed. So it is a difficult sort of position.
22:20 - Matthew Blakemore (Guest)
I mean, what we've seen certainly is cities like Amsterdam have hugely benefited from Brexit, because a lot of companies have moved there to set up an office, and Dublin is another one that's benefited. Yeah, Dublin for sure. You know, companies want a European base. I mean, it's interesting that, you know, a certain someone recently moved his hedge fund base to Dublin.
22:45 - David Brown (Host)
But let's not touch on that.
22:49 - Matthew Blakemore (Guest)
But, you know, people want access to the single market, right? And it's interesting that Northern Ireland is currently in a very advantageous position, because it's got access to both, and we've seen companies move to Belfast to access that benefit as well. So I mean, it's obvious to many that having access to the UK market and the single market is better than just having access to one or the other.
23:12
So, yeah, I mean it's an interesting situation, put it that way.
23:16
I mean, what I have seen, though, is some fantastic work from London & Partners, because I do a lot of stuff with them, and London & Partners have been around for a long time.
23:23
They're actually funded by the mayor's office in London, and they go around and support businesses looking to come to the UK and encourage businesses to come to London to set up base here, to kind of employ people here, and what we've seen a lot of is startups and companies that have already kind of established themselves in the single market do want to come to the UK as almost a test bed for going to the US.
23:49
So the UK's become a bit of an intermediary, so if you can succeed in the UK, you can succeed in the USA. And, equally, we've seen businesses that are succeeding in the USA come to the UK first, to then be a launchpad for going into the single market. So the UK is, as I say, acting as an intermediary, really, and as a test bed for companies from the US and companies from Europe to go across the pond. So we have benefited in that way. And, as I say, London & Partners, I can't recommend them enough. I mean, they're doing their third cohort now of Grow London, which is all about supporting startups and things, and they offer so much free support. I just wish they'd been around when I was running my own company. They would have been a massive, massive help.
24:39 - David Brown (Host)
So, right, we'll go back to AI again. We got totally off track again. Sure. So, going back to AI, what are you working on now? Because you've got an AI company now, don't you?
24:50 - Matthew Blakemore (Guest)
Yeah, so basically for the last five years I've been working in the media and entertainment sector, and I oversaw the digital transformation of a UK regulator that deals with age rating video content, so they do all of the age ratings of cinema and DVDs and, increasingly, online material, and I was brought in to kind of product manage a lot of the delivery of a cloud system for them. And then, off the back of that, I ideated, invented two AI products and saw them through. One of them was all around delivering more value for customers, so instead of just delivering UK-based age ratings, delivering multi-territory age ratings, and that's the one that's got the patent in the US. And then the second was actually a really exciting project with Amazon and the University of Bath, so we secured Innovate UK funding, and we worked together on developing a multimodal AI system to recognise issues within video content, and by issues I mean violence, for example, and its severity. So whereas there are tools right now, like Amazon Rekognition, that recognise where violence occurs, they don't actually at the moment indicate any sort of severity level. And so what we were trying to do was something quite groundbreaking, using visual recognition, audio recognition and speech recognition simultaneously to kind of come up with a judgment. And so we actually unveiled that last year in June at the London AI Summit, for violence, because we got it working for violence before that conference, and it was really exciting to see the audience's reaction, because it was a real step forward.
26:40
So yeah, having kind of overseen both of those projects, and having had the experience previously with AI at That Looks Good On Me,
26:46
I kind of felt like now is the time to pull all of that expertise together, all of that experience together, plus all of my experience from working in AI standards, so working for the British Standards Institute and the International Standards Organisation for the last four years, developing the AI data lifecycle standard and contributing to the other AI standards as well, and put it together with contacts that I've made along the way in a consultancy that offers businesses the opportunity to really work with us to understand how to build an AI strategy that's responsible and ethical, and how to deliver an AI project. Because I think so many companies at the moment are exploring artificial intelligence. Their boards are putting pressure on them to kind of incorporate artificial intelligence into what they're doing, but they don't necessarily have that expertise or understanding of the complexities of doing so, and the things they need to bear in mind and the things they need to set up to ensure, as I say, that the project is responsible and ethical.
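As a purely illustrative sketch of the multi-signal idea Matthew outlines, and not the patented system he actually built, one simple way to combine modalities is to score each signal separately and fuse the scores into a severity band. The classifiers are stand-ins and the weights and thresholds below are invented assumptions.

```python
# Fuse per-modality scores (each 0.0-1.0) into a coarse severity band.
# Each input is a stand-in for a real model: a frame-level visual detector,
# an audio-event detector, and a speech/transcript classifier.

WEIGHTS = {"visual": 0.5, "audio": 0.3, "speech": 0.2}  # assumed weighting

def fuse_violence_scores(visual: float, audio: float, speech: float) -> str:
    combined = (WEIGHTS["visual"] * visual
                + WEIGHTS["audio"] * audio
                + WEIGHTS["speech"] * speech)
    if combined < 0.2:
        return "none"
    if combined < 0.5:
        return "mild"
    if combined < 0.8:
        return "moderate"
    return "strong"

# Example: little shown on screen, but thuds on the soundtrack and threatening
# dialogue -> the scene is still flagged as "mild" rather than missed entirely.
print(fuse_violence_scores(visual=0.1, audio=0.4, speech=0.6))  # -> mild
```

The design point is the one Matthew makes: a single modality gives a blunt yes/no on violence, while combining signals is what lets a system grade severity.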
27:55 - David Brown (Host)
That's a great point, and it's really interesting as well, because obviously this is the Creatives with AI podcast, so we're concerned a lot with generative AI and some of the new tools that are coming out and that sort of thing. But one thing that's started to come up, and this is what gives me a little bit of hope, I think, is the actual business applications for AI. And the business application for AI isn't writing blog posts, right? Like, that's not where the value of AI comes in. I think the real, true value that we're gonna start to see from the more complex algorithms and the things that the new style of AI can do are things like looking for violence, but being able to be much more... what's the word I'm looking for? Finesse. To add a lot more finesse to what at the minute is quite a blunt tool. And that's what you're saying, is you can now add that finesse to it, and you can say, yeah, there's some violence, but actually it's mild, or yeah, there's some violence and this is extreme, as opposed to just marking it all as violence or not violence.
29:21
And it was interesting, because when I talked to Steve Dunlop
29:24
the other day, Steve raised a really good point about how companies are using AI voice tools to do things like...
29:35
One of their customers was Sunglass Hut in the US, and they have 14,000 locations, and so when they're doing audio ads, they use AI and a voice clone of the person who does the voiceover to read the 14,000 addresses. Because no human wants to read 14,000 addresses, of course, and no one wants to pay someone to do it, and it would be soul destroying.
30:00
I mean, obviously people did it in the past, but it's soul-destroying work that no one wants to do. So if you can have a tool that does something like that, but then you still have the real, live person reading the copy itself to get the emotion in it, then that's a good business use for these new tools. And I think you talking about this and being able to do the violence detection, and then obviously that can split off into tons more things you could do. With porn, you can try to understand, if you see a naked breast, is it a mother breastfeeding or is it something else? And then it makes it much better to be able to add that sort of finesse into it. So another layer of context, isn't it?
30:45
That's it, exactly.
30:47 - Matthew Blakemore (Guest)
And the interesting thing is that the World Economic Forum, I think it was two weeks ago, were discussing this exact issue. So they had the AI leader from Meta there, and he was talking about how they basically used as much of the internet as possible to train their current LLM models. But where AI struggles is actually with taking data from its visual environment. So he compared AI to a four-year-old and said actually a four-year-old can absorb 50 times more data from its visual environment than AI currently can, and that's where the gap is. If we're ever gonna get to AGI, we need AI to be able to understand a lot more from its environment, a lot more from analysing videos and images, and so that's really where the gap is.
31:41
And obviously the problem is that analysing video and analysing images actually uses a lot more GPU power, and so Sam Altman was pointing out that really, to make the strides we need to, we need a breakthrough both in terms of GPU chips, which may come this year. There are new GPU chips coming; whether they'll be powerful enough to do everything we want, I don't know, but there's progress being made. But then also in a new energy source. So that's very interesting.
32:14 - David Brown (Host)
What happens when we get fusion and we get quantum at the same time?
32:20 - Matthew Blakemore (Guest)
That's gonna be an interesting future, isn't it?
32:23 - David Brown (Host)
I mean that's.
32:24 - Matthew Blakemore (Guest)
I would say we don't necessarily need fusion, because I was reading this week that Iceland are experimenting with a new form of energy, potentially because they're drilling down into the crust, so into the actual magma, and they're gonna use magma as a potential energy source and they're doing some experiments to see if that's possible. Obviously, if that is possible, that could generate a lot more energy than even the geothermal plants they have already.
32:48 - David Brown (Host)
Because nothing will go wrong with that, okay. I will say, haven't we seen science fiction films about stuff like that? It's like, it'll all be fine until some weird bacteria comes up from down there and then wipes everyone out, because we're not equipped to deal with that bacteria from a billion years ago or something.
33:12 - Matthew Blakemore (Guest)
Absolutely. I'm pretty certain I've seen a show about that. What was it called?
33:17 - David Brown (Host)
Well, there's a few. Yeah, Well, there was one. There was Fortitude.
33:24 - Matthew Blakemore (Guest)
Fortitude, that was the one, yeah.
33:26 - David Brown (Host)
Which, yeah, I won't comment on the new show that's nearly a copy of Fortitude. But oh really, that's my own personal vendetta that I have against that. But True Detective, the new True Detective, is essentially, if you watch it, almost scene for scene in the beginning a copy of Fortitude.
33:48 - Matthew Blakemore (Guest)
Oh really.
33:48 - David Brown (Host)
Yeah, and I'm just like, oh my God, really? They've Americanized Fortitude and made a copy of it. Spoiler alert for anybody who hasn't watched Fortitude, but I haven't really looked into it and watched enough of the episodes to see if they've started to split the story off and kind of take it in its own direction. But both my wife and I sat and watched the first episode and were like, but the hunter, looking through the scope, that's literally from Fortitude, there's an exact same scene. We were just like, what is going on here?
34:22
So anyway, I'll get off my soapbox about the US copying international shows and pretending that they came up with the idea. So, thinking about the idea of AI being used in business as a business tool, where do you see, and how do you think, that's going to play out over the next few years? Do you have any ideas about how business is really going to start to use AI to become more efficient at some of those tasks that, maybe not that people don't want to do, but that are hugely time and labor-intensive to do? Do you have any thoughts on that?
35:03 - Matthew Blakemore (Guest)
Yeah, I mean, I guess my fear is that, to a lot of people, AI is just generative AI, and what I fear is that businesses will approach AI from the perspective of, oh, we need to do generative AI, without actually analyzing what sits best with their business plan, what sits best with their business strategy. Because there are so many different forms of AI, and AI has been around and used by the big companies, by big tech, for a long, long time. If you look at the Netflix recommender algorithm, or you look at, you know, how Amazon are able to personalize your experience, all those sorts of things, that's all AI.
35:49 - David Brown (Host)
Well, it's machine learning, but yeah, okay.
35:51 - Matthew Blakemore (Guest)
Yeah.
35:52 - David Brown (Host)
I'll give you AI on that one. Collectively, we're all trying to sell it, so it's AI. For engineers it's machine learning, but if we're trying to sell it on the commercial side, it's AI, so it's all good.
36:04 - Matthew Blakemore (Guest)
Yeah, yeah, absolutely. But my point is, what I'm worried about is that businesses are going to jump on the generative AI train when it may not necessarily be the best form of AI for them to be implementing, and I think they need to really think about what their business strategy is and how AI can benefit them, because there are so many productivity gains and things you can get from traditional machine learning in certain aspects. You know, I wouldn't say that the projects I did at the UK regulator were generative AI, but they had a big impact. And so there are so many different types of artificial intelligence, it's important for businesses to really seriously consider what's going to offer them the most value and where they should be dedicating their time and resources. Because my fear is, if they don't do that, a lot of businesses are going to spend loads of money and then they're going to be really unhappy, because they don't see the returns they're expecting, and that's going to result in negative press for AI. And we've already seen that.
37:00
Actually, we've seen, you know, some companies have implemented generative models without understanding bias and without understanding hallucination and all this sort of stuff, and then they've had problems and they blame AI. They say, oh, AI is terrible because it's done this. Well, no, it's the implementation of the artificial intelligence, that's the issue. I don't think they're going to be very happy about that.
37:22 - David Brown (Host)
Yeah, I think so. And I guess it's like any new technology, though, isn't it?
37:26 - David Brown (Host)
Everybody's just trying to figure out where it fits, you know, and trying to understand.
37:32
I was laughing because all I could think in the background, as you were saying they need to make sure it fits into their business plan, was: as long as they haven't used AI to write their business plan in the first place. Because it looks good, and people think it's a good idea to do that because it looks good, but nobody really... Because, again, we all know the value of the business plan, right? It's doing the work. It's not the document itself. The document itself is just a fixed point, and then the next day it's out of date. It's the thinking that goes behind it, that it forces you to do, which is the important bit. And I know there are people out there using AI to do things like create marketing plans.
38:18
Yeah and it's like that totally.
38:20
Yes, you can get a nice-looking document out of the end of it.
38:25
It's why I stopped using AI to do things like presentations and outlines and all that sort of stuff, because it's great, it can create the content for you, and you can go to a site and it'll create these amazingly designed slides and everything else.
38:41
But it's like trying to read someone else's slide. If you didn't write the slide yourself and you didn't go through the pain of trying to think, what do I put on this slide and what do I want to talk about, it makes it really difficult, actually, at the end of the day, to engage with that content and then to give some sort of meaningful presentation. I mean, I've even got to the point now where I've stopped using slide decks as much. Unless somebody really specifically wants one, I try to go without them, and I just have some written notes on a card and, you know, go back kind of old-fashioned style. But it's because it makes me actually think and really, you know, talk about what it is, instead of going, oh okay, well, that's on the slide, so I should talk about that.
39:27 - Matthew Blakemore (Guest)
Yeah, I mean, there is certainly a danger of over-reliance on this new technology. I think even Sam Altman has pointed that out, actually, that people shouldn't get over-reliant on the technology. What I think it's really good for is if you've got a lot of research to do and you've sourced the research yourself, using it to cut it down into manageable chunks for you to digest when forming a presentation or something. It's brilliant, because you don't have to read pages and pages anymore. You can make it a lot more concise, and you can read through it and then go, okay, actually, I want to include this point, this point, this point. Or even if you ask it, okay, can you pick out all of the quotes from this document?
40:06 - David Brown (Host)
Yeah, so nice, yeah yeah.
40:10 - Matthew Blakemore (Guest)
But, yes, using it to write your presentations? Not necessarily a great idea, because I have seen people called out as well where it's made up a quote or something, and they put the quote on there and it doesn't exist, or the person that it says made the quote doesn't exist. So you do have to be careful.
40:29
But it is a good resource if used properly, or used how it's intended to be used, I guess. And, you know, what we have seen is people using it in a way that it's not suitable for at the moment.
40:44 - David Brown (Host)
Yeah, it's.
40:44
It's also interesting that you would say that, because I'm interested, excuse me, I'm interested in your thoughts on AGI and how that might work out. And let me give you a little bit of a prompt to think about. The way I think it's gonna happen is, I think what we're gonna end up with is some sort of a federated model, where we're gonna have some sort of a top level, which will be your, let's call it, conversational AI. So it's what you will interact with, and you may go to it and say, I've got this problem, I've got this physics problem I need some help with, and it won't know the answer. What it will do, though, is it will know where the physics AI is, and it will interface with the physics AI for you. So you'll ask it a question, and it will go to the physics AI, and the physics AI will be a narrow AI, specifically trained on all physics questions and whatever, and it will give the information that will then be passed through and back to you.
41:47
Is that... I mean, that's the only way I can see it working, at least in the midterm. Now, at some point down the line, maybe when we have quantum, and I slightly joke about quantum, because it's like this thing off in the distance that everybody keeps talking about and it will happen someday, probably. But until then, that's the only way I can see it working. Do you think it'll work that way, or do you think it's gonna come about in some other way?
42:15 - Matthew Blakemore (Guest)
I would agree with you that I think it's gonna be multimodal, in the sense that it'll be multiple models plugged into each other to come up with the responses. If you think about how the human brain works, you know, we have parts of the brain that are focused on creativity, we have parts of the brain that are focused on memory and all that sort of stuff. So if you think about AI or AGI in that sense, then yeah, it's about developing the multimodal ecosystem that can all plug into each other to provide the overall AGI experience. I guess the argument with AGI is, how do you define it? And there isn't currently an agreed definition of AGI. And the interesting thing is, you could argue that a fish has general intelligence. Have we already surpassed what a goldfish can do with an LLM? Maybe you could argue yes, because even the basic visual recognition tools and things, you know, probably combined with the LLMs, may compare to some animals in terms of what they're able to absorb. But if you're talking about actual human intelligence, then it's really tricky, because LLMs can obviously take in a lot more information than a human possibly could, and retain a lot more information than a human possibly could. I can't remember exactly the number of hundreds of thousands of years that the Meta guy said it would take a human to read all of the data, he said 200,000 years or something, but either way, it's a hugely long period of time that it would take a human to try and absorb or read all of the data that has gone into these LLM models. So in that respect, we're at a point where we've built a model that can retain a lot more information than a human brain.
44:07
What we haven't got at the moment is models that are able to understand the world visually in the same way that a human can, or take in the sounds that a human can and interpret them in the same way. And so there's work being done on those models, and if you plug all of those models together, as you're saying, you could get to the point where eventually, and there's going to need to be, as I say, big strides forward with GPU power, big strides forward on energy sources, AI may be capable of understanding its environment and making judgments for itself from its environment. And that, I guess, is what a lot of people would consider to be AGI. But the interesting thing is that, you know, the OpenAI board get to decide what AGI is, really, so they may have other opinions or other views of what achieving AGI looks like. And so that's the complication at the moment. There's no agreed definition, and many people have different ideas of what, theoretically, it would look like.
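Here is a minimal sketch of the routing pattern David and Matthew are describing, assuming a top-level conversational layer that hands questions to narrow specialist models. The registry, the specialist functions and the keyword-based topic classifier are all hypothetical stand-ins, not any real product's architecture.

```python
from typing import Callable

# Hypothetical registry of narrow, specialist models.
def ask_physics_model(question: str) -> str:
    return f"[physics model] answer to: {question}"

def ask_legal_model(question: str) -> str:
    return f"[legal model] answer to: {question}"

SPECIALISTS: dict[str, Callable[[str], str]] = {
    "physics": ask_physics_model,
    "legal": ask_legal_model,
}

def classify_topic(question: str) -> str:
    """Stand-in for the top-level conversational AI deciding which
    specialist to call (a real system might use an LLM for this step)."""
    return "physics" if "force" in question.lower() else "legal"

def federated_answer(question: str) -> str:
    topic = classify_topic(question)
    specialist = SPECIALISTS.get(topic)
    if specialist is None:
        return "No specialist available; answering from the general model."
    # The conversational layer relays the specialist's answer back to the user.
    return specialist(question)

print(federated_answer("What force keeps a satellite in orbit?"))
```

The sketch only illustrates the shape of the idea: the user talks to one front door, and the routing plus a library of narrow models does the heavy lifting behind it.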
45:08 - David Brown (Host)
We can't even agree on what a startup is, so yeah.
45:12
And that's a private joke that you'll probably understand, but yeah, I mean, we've got no hope of agreeing what it is ultimately. Do you know what it does, though? Like, if you just think about all of that that you just said for a second. You know, it would take 200,000 years to read all the same materials and get all the data and whatever. Like, who cares?
45:34
The thing is that a human is still a million times better at most things than an AI is. Like, the AI, good as it is, has a very, very narrow skill even still, and, you know, most children are better at processing data, and their brains are better at processing data, than any AI, no matter how powerful it is and how much data you throw at it. And I really take umbrage with the concept where people say, oh well, what if we're going to be in a world where a computer is going to be a million times smarter than you? It's like, there isn't a "million times smarter". That's like a fake metric, it doesn't even exist, because there will always be things that humans can do that computers can't, and that's not a bad thing, I think, you know.
46:29
I think the one advantage we'll see with AI potentially, and I could be totally wrong, and this might be an interesting philosophical concept to talk about for a second, but in my mind, the advantage that AI has at the minute is that, even though it has some bias in the data, it doesn't have any ulterior motive when it gives you an answer. So you can ask it a question and it will give you an answer, but it's not trying... it doesn't have any hidden objectives or any hidden goals or any hidden motivations, other than trying to give you an answer that you like. And that is what's very different from humans. So a human can give you an answer, but you never know why they're giving you that answer, and it could be correct or incorrect, and that could be on purpose or by accident.
47:24
Do you know what I mean? And so you never really know. But with AI and with computers, at least it seems at the minute that there's no ulterior motive to it, and I think that's an interesting wrinkle.
47:39 - Matthew Blakemore (Guest)
It's interesting, because I guess, with a lot of models, though, the AI itself is not making any biased judgments itself.
47:51
If it's been trained on data with bias... take a conflict, for example, I'm not going to pick out any in particular, but if there's a conflict going on, and there's one country with one view and they train an AI model, and there's another country with another view and they train an AI model, then the models are going to come up with very different judgments about what's going on. Correct. And so, though the AI models themselves are not coming up with a judgment because of feelings or anything like that, they are reflecting back what the human data going into them has put in.
48:33
So, yeah, I mean, I think companies like OpenAI and Meta have worked quite hard to try and ensure balance within the responses that you get from their models. They haven't necessarily succeeded on all accounts, but that's certainly an aim of theirs, to try and ensure that if you ask it a question, you get a balanced answer. And I know there are cases where it's not been balanced, and they've been quite well documented, but that's certainly an aim of theirs, and they've got all these kind of moderation tools and things around the responses that can be given. But, yeah, it is a very, very interesting thing that AI itself is not inherently, at the moment, making judgments. But if you build, as you've said, that sort of federated environment where it's getting a lot of data from different places, is it going to be able to actually form its judgments in a more human way? Really, you know, is it going to be able to form opinions? Yeah.
49:36 - David Brown (Host)
But then I wonder if it becomes... I love "balanced". That's amazing, that's an excellent way to say it. Because one of the... and I talked about this in that meeting we had about the copyright stuff the other day, and I don't remember why, but I crammed it in. But when you start talking about bias, like, I did a whole podcast about this with my friend Mike, and what we were talking about is, do we want AI to be a reflection? Because you say that data is biased, but everybody's biased. As humans, we all have our own biases, right? And all you're seeing is that bias in a mirror, and it's being reflected back to us in a way that's really making us think about ourselves and about how we are and about how we are as a society and all that. And personally, I think it's more important to keep that mirror, because that keeps us in check and that keeps us honest, and we can start to see, are we making progress, or are we creating some fictional reality that doesn't exist? Because if you start tinkering with the answers, then now you're presenting back a world that doesn't really exist. And so, I'm thinking of young kids now, and let's say we start tinkering with answers, right? They always give the example of, you know, if you put in "entrepreneur", you always get pictures of white men, and maybe that's because 90% of entrepreneurs are white men, I don't know. But do you then want to say, well, we want that to be a balanced view, so we start tinkering with the results so it shows, you know, different races and different genders and all this stuff, right? And we go, okay... like, who's going to decide what percent of what? What percent are men and what percent are women, and what percent are this and that? And that's a whole nightmare in itself, right? Like, how do you break that out percentage-wise to make it fair or balanced? But then you start saying, but that's not reality.
51:46
So then, when people have been looking at that kind of fictional world and then they go into the real world, guess what? That's not what the real world is like. And then there's some weird disconnect that people start to experience, and I think we're seeing a little bit of it now with kind of Gen, what is it, Z? And those kids kind of going into the workplace, and, you know, there's been all this talk about how it's all got to be like this and everybody's got to have all these, you know, mental health breaks and all this other stuff. And they get into the actual world, and businesses are savage and it's like a jungle out there, and they don't know what to do. And there's all these examples of kids, you know, crying at their desk, and it's like, dude, it's just work, get on with it.
52:29
But I think it's because there's that disconnect, and so that's what I worry about. But I like the word "balanced", because I think it gives a better... I think semantically it's better. I think it means probably the same thing, but semantically it makes me feel more comfortable. So I absolutely thank you for that, because I will 100% use "balanced" moving forward.
52:55 - Matthew Blakemore (Guest)
I mean, it is very interesting, you know, because one of the other things that actually came out of the projects that I was working on with the UK regulator is the importance of understanding the difference between wanted and unwanted bias, depending on what your model is being developed to achieve.
53:17
Take a video analysis tool as an example. If your model is designed to work best on Hollywood content, for example, and Hollywood content has particular biases, you know, if you remove bias entirely from your model, you might find that it performs less well at analyzing that content than if it has the biases built in and understands them. And so there is an ethical quandary, because you could say, well, some of the biases held in Hollywood are unethical, but then if your model is entirely ethical, it won't perform as well at analyzing the content. And so there's this quandary, you know: are you going to change the world with your model? Is your objective to change everyone's opinions in line with your model, or is your objective to make your model reflect the reality, to get the best results from what it's supposed to be doing? And so that's really something that a lot of ethics committees and things within companies are going to have to be considering and finding a balance on, because it is a really important debate to have. And one of the things that has frustrated me throughout the time I've been working in AI as well is that AI often gets the blame for bias. It's so easy for companies to say the AI is biased. Oh, we've got rid of the AI product because it's biased.
54:49
...trained a recruitment tool in...
55:13 - David Brown (Host)
That's exactly my point. Yeah, it reflected back the actual situation that was happening, and they didn't like that.
55:20 - Matthew Blakemore (Guest)
Yeah, and so, you know, the models can shine a light on what's happening already, but companies should already be aware of this stuff. Why are they not putting their own workers under as much scrutiny as they are the AI models? That's my point. Because if you're just going to keep blaming AI models for bias, then you're not actually dealing with the inherent problem.
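To make that point concrete, a basic bias check on a scoring model looks something like the sketch below: compare average scores across a protected attribute before blaming, or deploying, the model. The candidate data, the attribute and the review threshold are all invented for illustration; a real audit would control for qualifications and use proper statistical tests.

```python
# Illustrative check: does a trained scorer give systematically different
# scores across a protected attribute? (Invented example data.)

candidates = [
    (0.82, "male"), (0.79, "male"), (0.74, "male"),
    (0.61, "female"), (0.66, "female"), (0.58, "female"),
]

def mean_score(group: str) -> float:
    scores = [s for s, g in candidates if g == group]
    return sum(scores) / len(scores)

gap = mean_score("male") - mean_score("female")
print(f"score gap: {gap:.2f}")

# Arbitrary review threshold: a persistent gap like this says as much about
# the historical hiring data the model learned from as it does about the model.
if abs(gap) > 0.1:
    print("flag for review: the model is reflecting bias in its training data")
```

The same check applied to the human decisions that produced the training data is, in effect, the scrutiny Matthew is arguing companies skip.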
55:39 - David Brown (Host)
Exactly. Yeah, thank you. Very well said. That was exactly my point, and I think it's more valuable, at least at this point, to look at the actual data itself and see what that's telling us than it is to try and finesse the results so that it shows you something that doesn't actually happen. And, you know, I don't know, if they changed their algorithm so that more female candidates get scored higher in the recruitment process, does that mean that more female engineers are actually going to get hired? I don't know. That's a good question. Maybe, maybe not.
56:20 - Matthew Blakemore (Guest)
It depends on the interview.
56:24 - David Brown (Host)
Wow, okay, that was cool. I like it. Excellent. Balanced. Love it. I'm conscious of time, so we're getting towards the hour mark here. One of the things I like to ask everybody is, when you work with AI, are you polite? Do you say please and thank you and those sorts of niceties to it, or do you treat it like it's a tool, and you just ask it for things and get the answers and then go on with your day?
56:55 - Matthew Blakemore (Guest)
That's a good question. I would say that, certainly, when I started using ChatGPT for the first time, I was very polite. But then, when GPT-4 came out, it obviously was a lot more advanced and it was responding with a lot more detail. More recently, what I've noticed, and what a lot of people have documented, is that GPT-4 has got worse. What I suspect has happened, and I can't possibly say for sure because I don't have access to OpenAI's internal goings-on, is that it seems to coincide with them adding multimodal functionality. I think what they might have done is reduce the quality of GPT-4 from a text-generation perspective to ensure it's not pulling so much on their GPUs, now that they've also incorporated DALL-E and other multimodal aspects into the tool.
57:51
And what has been interesting is that people have said, if you're aggressive with it, it produces better answers. Have they got some sort of override where, if you're rude to it and dissatisfied with its responses, it goes back to the original, better-quality GPT-4? I've actually seen for myself that its responses improve if you are cross with it. So the answer really is yes, in general, because they've kind of humanised these products you're communicating with, and being polite is a natural human thing. But if you're going to get better responses by being angry, because you're dissatisfied with what it's originally given you and that's the way they've programmed it, then that's…
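Matthew is careful to say only OpenAI could confirm any of this, but the claim is easy to probe informally: send the same underlying request with different tones and compare what comes back. The sketch below assumes the OpenAI Python SDK (v1-style client) with an OPENAI_API_KEY set in the environment; the model name and prompts are illustrative, not anything discussed in the episode.

```python
# Rough probe of the "tone affects answer quality" anecdote: same request,
# different phrasings. Assumes the OpenAI Python SDK (v1-style client) and an
# OPENAI_API_KEY in the environment; swap in whatever chat model you can access.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = {
    "polite": "Could you please write a short marketing plan for a podcast? Thank you!",
    "blunt":  "I'm not interested in excuses. Write a complete, detailed marketing plan for a podcast now.",
}

for tone, prompt in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice, not the episode's
        messages=[{"role": "user", "content": prompt}],
    )
    text = response.choices[0].message.content
    print(f"--- {tone} ({len(text)} characters) ---")
    print(text[:300], "...")
```

A handful of runs like this is nowhere near a controlled experiment, since responses vary between calls anyway, but it is a quick way to sanity-check the anecdote for yourself.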
58:44 - David Brown (Host)
That's terrible. That's terrible, because think about the knock-on effect. For those of us for whom this technology is new, we could maybe work that out and it might not have too much of an impact. But if younger people learn that the thing to do is to be mean because you get better responses, the total impact on society could be enormous. Can you imagine? Oh, that's terrible. I hadn't seen any of that. Now I have something to do this afternoon: I'm going to go and try to find some of it.
59:24 - Matthew Blakemore (Guest)
There are lots of articles on it. Lots of people have said that they're genuinely seeing better responses when they get angry at it.
59:29 - David Brown (Host)
That's terrible.
59:33 - Matthew Blakemore (Guest)
Yeah. Wow. Only OpenAI can tell you whether that is actually the case and how they've programmed it. But I don't know how else you would explain the responses improving when people have been rude.
59:46 - David Brown (Host)
Well, I have noticed that they've certainly changed the style. I'll pick on a marketing plan: you used to be able to say, "Write me a marketing plan for a podcast," and it would literally give you the headings, the text, the body, the paragraphs, everything. Now, if you say, "Write me a marketing plan for a podcast," it gives you the headings and the outline, and it says you should think about what you might want to put in each section instead of just writing the answer for you. And no one's ever mentioned that thought before. This has been fantastic; you've brought up loads of new stuff. This is great.
No one's ever had the idea that maybe they scaled it back because they were using the compute power for something else, and that's a really good theory. But, yeah, it's slightly distressing that you have to be mean to it. Now I have to test that as well, but I like being nice to it, because I think someday, when it takes over, it's going to go, "That Dave guy was always nice to me. We're not going to kill him. We're going to kill the other guy over there, because he was an asshole most of the time." But anyway, cool. Okay. And when, not yet, but when you have your AI personal assistant, what are you going to name it?
Matthew Blakemore (Guest)
Oh, to be honest, I've not really thought about that. What would I name it? I mean, the thing is, I'm so used to saying Alexa now that I'd probably just keep the name Alexa, because otherwise I'd keep calling it Alexa by accident.
David Brown (Host)
But then Alexa will get jealous.

Matthew Blakemore (Guest)
That's true. Alexa at that point might be switched off. If I've got a personal assistant, I don't need Alexa anymore.
David Brown (Host)
Okay, okay, interesting. Right, thank you very much for your time today. That's been amazing; loads of cool stuff has come out of the conversation. I've taken some notes as we went along, so a lot of what we talked about, like BSI and some of the other things, I've already put into the show notes, and people will have links to all of it. Is there anything in particular you'd like to promote? Do you have a new book out, or some appearances coming up, or anything else you'd like to mention?
Matthew Blakemore (Guest)
So I'm speaking at several events coming up. I'm speaking at the Media and Entertainment Services Alliance about artificial intelligence and IP; that's coming up in February. I'm speaking at the European Broadcasting Union as well, so it should be quite interesting to hear from broadcasters around Europe about what they're doing and what technology they're using.
David Brown (Host)
When is that?
Matthew Blakemore (Guest)
So that's actually at the end of January, so that's next week, in Geneva.
David Brown (Host)
All right. Okay.
Matthew Blakemore (Guest)
Awesome. So yeah, there's exciting stuff coming up. And there's also the Generative AI for Marketing Conference in London.

David Brown (Host)
I saw that. I can't afford to go, though; it's way too expensive. It's like two grand for a ticket.
Matthew Blakemore (Guest)
Oh, goodness me. Yeah, I don't know if I'm allowed to bring a guest, but if I can, I'll bring one.
David Brown (Host)
I mean, I'd love to go to that, but I look at a lot of this stuff and, unfortunately, podcasters in most circumstances aren't considered press, so a lot of the time we can't even blag a press ticket. Although I did get one to the AI Summit in New York, I couldn't go in the end, which was really annoying, because it's one of those things where, when you get an opportunity to have a pass and then don't show up, you think you'll never get another one. But yeah, I couldn't work it out.
Matthew Blakemore (Guest)
But you know what? The team that organised the London AI Summit and the New York AI Summit are brilliant, and I'm sure they will provide one for you for London.
David Brown (Host)
Hopefully. I hope so. I might do an interview with them or something and just say, "Hey, look, I'll do a podcast for you." Brilliant. Matthew, thank you very much again for your time today. It's been brilliant, and I'll follow up with you at some point in the future. I'm sure something major will happen, and I want to get you back on the podcast to talk about what's going on, if that's all right.
Matthew Blakemore (Guest)
No, I'd love to, absolutely love to. Brilliant.
David Brown (Host)
All right, thanks very much. Have a good afternoon.
Matthew Blakemore (Guest)
Thank you very much. Cheers. Bye.

David Brown (Host)
Thank you. Thanks again for listening, and stay curious.