Flush with $42M, hot AI startup Faculty plans to hoover up more PhDs… and steer clear of politics

In the wake of the news that UK-based AI startup Faculty has raised $42.5 million in a growth funding round, I teased out more from CEO and co-founder Marc Warner about his plans for the company.

Faculty seems to have an uncanny knack for winning UK government contracts, after helping Boris Johnson win his Vote Leave campaign and, ultimately, become Prime Minister. It’s even helping sort out the mess Brexit has since made of the fishing industry, tackle problems with the NHS, and tell global corporates like Red Bull and Virgin Media what to suggest to their customers. Meanwhile, it continues to hoover up PhD graduates at a rate of knots to work on its AI platform.

But, speaking to me over a call, Warner said the company has no plans to enter the political sphere again: “Never again. It’s very controversial. I don’t want to make out that I think politics is unethical. Trying to make the world better, in whatever dimension you can, is a good thing … But from our perspective, it was, you know, ‘noisy,’ and our goal as an organization, despite current appearances to the contrary, is not to spend tonnes of time talking about this stuff. We do believe this is an important technology that should be out there and should be in a broader set of hands than just the tech giants, who are already very good at it.”

On the investment, he said: “Fundamentally, the money is about doubling down on the UK first and then international expansion. Over the last seven years or so we have learned what it takes to do important AI, impactful AI, at scale. And we just don’t think that there’s actually much of it out there. Customers are rightly sometimes a bit skeptical, as there’s been hype around this stuff for years and years. We figured out a bunch of the real-world applications that go into making this work so that it actually delivers the value. And so, ultimately, the money is really just about being able to build out all of the pieces to do that incredibly well for our customers.”

He said Faculty would be staying firmly HQ’d in the UK to take advantage of the UK’s talent pool: “The UK is a wonderful place to do AI. It’s got brilliant universities, a very dynamic startup scene. It’s actually more diverse than San Francisco. There’s government, there’s finance, there are corporates, there’s less competition from the tech giants. There’s a bit more of a heterogeneous ecosystem. There’s no sense in which we’re thinking, ‘Right, that’s it, we’re up and out!’ We love working here, we want to make things better. We’ve put an enormous amount of effort into trying to help organizations like the government and the NHS, but also a bunch of UK corporates, embrace this technology, so that’s still going to be a terrifically important part of our business.”

That said, Faculty plans to expand abroad: “We’re going to start looking further afield as well, and take all of the lessons we’ve learned to the US, and then later Europe.”

But does he think this funding round will help it get ahead of other potential rivals in the space? “We tend not to think too much in terms of rivals,” he said. “The next 20 years are going to be about building intelligence into the software that already exists. If you look at the global market cap of the software businesses out there, that’s enormous. If you start adding intelligence to that, the scale of the market is so large that it’s much more important to us that we can take this incredibly important technology and deploy it safely in ways that actually improve people’s lives. It could be making products cheaper or helping organizations make their services more efficient.”

If that’s the case, does Faculty have any kind of ethics panel overseeing its work? “We have an internal ethics panel. We have a set of principles, and if we think a project might violate those principles, it gets referred to that panel. It’s randomly selected from across Faculty. So we’re quite careful about the projects we do and don’t work on. But to be honest, the vast majority of what’s going on is very vanilla. They are just clearly ‘good for the world’ projects. The vast majority of our work is for corporate clients, helping them make their businesses that bit more efficient.”

I pressed him to expand on this issue of ethics and the potential for bias. He said Faculty “builds safety in from the start. Oddly enough, the reason I first got interested in AI was reading Nick Bostrom’s work about superintelligence and the importance of AI safety. And so from the very, very first fellowship [Faculty AI researchers are called Fellows] all the way back in 2014, we’ve taught the fellows about AI safety. Over time, as soon as we were able, we started contributing to the research field. So we’ve published papers at all of the biggest computer science conferences (NeurIPS, ICML, ICLR) on the topic of AI safety: how to make algorithms fair, private, robust and explainable. These are a set of problems that we care a great deal about and, I think, are generally ‘underdone’ in the wider ecosystem. Ultimately, there shouldn’t be a separation between performance and safety. There is a bit of a tendency in other companies to say, ‘Well, you can either have performance, or you can have safety.’ But of course, we know that’s not true. Cars today are faster and safer than the Model T Ford. So it’s a sort of false dichotomy. We’ve invested a bunch of effort in both those capabilities, so we obviously want to be able to deliver wonderful performance for the task at hand, but also to ensure that the algorithms are fair, private, robust and explainable wherever required.”

That also means, he said, that AI might not always be the ‘bogeyman’ it is often made out to be: “In some cases, it’s probably not a huge deal if you’re deciding whether to put a red jumper or a blue jumper at the top of your website. There are probably not huge ethical implications in that. But in other circumstances, of course, it’s critically important that the algorithms are safe, are known to be safe, and are trusted by both the users and anyone else who encounters them. In a medical context, obviously, they need to be trusted by the doctors, and the patients need to be sure they actually work. So we’re really at the forefront of deploying that stuff.”

Last year the Guardian reported that Faculty had won seven government contracts in 18 months. To what does he attribute this success? “Well, I mean, we lost an enormous number more! We are a tiny supplier to government. We do our best to do work that is valuable to them. We’ve worked for many, many years with people at the Home Office,” he told me.

“Without wanting to go into too much detail, that 18 months stretches over multiple Prime Ministers. I was appointed to the AI Council under Theresa May. Any sort of insinuations on this are just obviously nonsense. But, at least historically, most of our work was in the private sector and that continues to be critically important for us as an organization. Over the last year, we’ve tried to step up and do our bit wherever we could for the public sector. It’s facing such a big, difficult situation around COVID, and we’re very proud of the things we’ve managed to accomplish with the NHS and the impact that we had on the decisions that senior people were able to make.”

Returning to the issue of politics, I asked him whether, in the wake of events such as Brexit and the election of Donald Trump, both of which were affected by AI-driven political campaigning, AI is too dangerous to be applied to that arena. He laughed: “It’s a funny old question… It’s a really odd way to phrase a question. AI is just a technology. Fundamentally, AI is just maths.”

I asked him if he thought the application of AI in politics had had an outsized or undue influence on the way political parties have operated in the last few years: “I’m afraid that is beyond my knowledge,” he said. But does Faculty have regrets about working in the political sphere?

“I think we’re just focused on our work. It’s not that we have strong feelings either way; it’s just that, from our perspective, it’s much, much more interesting to be able to do the things that we care about, which is deploying AI in the real world. It’s a bit of a boring answer! But it is truly how we feel. It’s much more about doing the things we think are important, rather than judging what everyone else is doing.”

Lastly, we touched on the data science capabilities of the UK and what the new fundraising will allow the company to do.

He said: “We started an education program. We have roughly 10% of the UK’s PhDs in physics, maths and engineering applying to the program. Around 400 people have been through it, and we plan to expand it further so that more and more people get the opportunity to start a career in data science. And then inside Faculty specifically, we think we’ll be able to create 400 new jobs in areas like software engineering, data science and product management. These are very exciting new possibilities for people to really become part of the technology revolution. I think there’s going to be a wonderful new energy in Faculty, and hopefully we’ll play a small, positive part in growing the UK tech ecosystem.”

Warner comes across as sincere in his thoughts about the future of AI and is clearly enthusiastic about where Faculty can take the whole field next, both philosophically and practically. Will Faculty soon be challenging that other AI leviathan, DeepMind, for access to all those PhDs? There’s no doubt it will.