Does mindset matter more than skill when hiring? ft. James Stanier

March 31, 2026
48 min
podcast
EP 5

What to expect?

What do fun, curiosity and human connection have to do with scaling technology teams successfully? As James Stanier, CTO for Veterinary at Nordhealth, reveals in this episode of The Future is Human podcast - everything. Tune in as he and host Naomi Trickey explore why hiring for mindset matters more than experience, how to maintain trust and collaboration across remote teams, and the critical strategies for keeping your organisation human-centred as AI transforms the workplace. It turns out that if the question is how to lead teams that don't just build great products but grow great people, the answer lies in making fun part of the workplace again.

Transcript

[00:00:01] James:  The whole kind of building of trust, getting to know people, having one-to-ones, having group meetings with leads, having face time in real life occasionally as well, I don't think that disappears because, I mean, that's the essence of what work is, you know? You are doing high-impact things with people you have high rapport with, and if you can't build the rapport, then, you know, we're not just automatons. Peer review is just the vehicle for the interaction, and that's really important because otherwise, we're not really working with anybody then, just working with ourselves.

 

[00:00:29] Naomi:  Welcome to The Future is Human. I'm Naomi Trickey. And in this podcast, I talk with leaders from tech and hospitality businesses about how both they and their people are navigating the edge between humans and technology in an increasingly automated environment. We break out of the mould of a standard business conversation to hear more informal perspectives and reflections, to understand what people are really feeling about the future of work.  

 

[00:00:56] Naomi: Welcome to The Future is Human. Today, we are joined by James Stanier, who is Chief Technology Officer for Veterinary at Nordhealth. Now, another disclaimer, I know James. We worked together many years ago now, and James is a really experienced engineering leader and author. Three books, right? 

 

[00:01:15] James:  Yeah. That's true. 

 

[00:01:16] Naomi:  And a speaker who's really passionate and has written a lot about, actually, continues to write a lot about building teams and technology that make a difference. So, from helping to grow Brandwatch from a small startup through its acquisition to leading key engineering pillars at Shopify, you've actually lived through every stage, really, of scaling a tech organisation, but always, always with a human-centred philosophy. And if you don't believe me, just go read the books or subscribe to James' Substack. But in today's conversation, what we're going to talk about is how to lead and scale engineering teams without losing that human connection, and that's something that James and I started talking about a long time ago. It feels like a continuation of that conversation. So, it's a real pleasure to have you in conversation today. Thank you for joining me. 

 

[00:02:01] James:  Likewise. Thank you. 

 

[00:02:02] Naomi:  So, as I said, you've gone from being a hands-on engineer. I remember we employed you. You were sort of at the end of or just about to submit your PhD, I think. 

 

[00:02:11] James:  That's very true. And, also, it was certainly an interesting reality check to then answer the phones to angry customers. That was, like, the best learning experience, actually. Like, I learned so much. I hated it at the time. It was like a proper ego-shattering moment to come into a startup and then just get people moaning at you on the phone when I thought that I'd be doing some kind of, like, very lofty algorithmic things. But, actually, you learn everything about, like, how to run a business by doing that stuff. 

 

[00:02:36] Naomi:  Would you say there's one leadership lesson that's really stuck with you from those early days of being shouted at by customers?

 

[00:02:43] James:  This might sound like a silly answer, but it's just that the best people are the people that just do things. And when we were at Brandwatch, you know, it was such a small company, and we had so much to do that everyone just had to do everything. And again, that felt like a chaotic mess, and it felt almost like a bug in a way. Like, no, no, no, we should have more defined roles and departments and people and processes and things. But, no. Actually, like, optimising for people that do things is, like, the best possible way to find great people and to get things done. And you notice how rare it is, I think, in many organisations today to find people who are just willing to be generalists, to talk to customers, to write code, to look at sales funnels, to do everything, to just get things done. And I think, culturally, I mean, we can always go deeper into this. But, you know, we've sort of looked back over the last, say, like, 15 years, from before the pandemic period through to the remote shift and the pandemic, and then the massive stock market bubble that popped after COVID, and just the whole cultural shift in technology in general. I think we're back at the place now where the most valued thing in any individual is just being someone who does stuff. And that's the one thing I really learned at the startup: being pretty good at everything is often better than being amazing at just one thing.

 

[00:03:50] Naomi:  Yes. 100%. I mean, I think it changes over time as the organisation matures, right? And we could argue that there's a point at which specialism becomes much more critical, at least for certain roles. But I was listening to a podcast earlier, actually, with Spotify's CHRO, and she was talking about how they've created an E-team, and the E-team is the top 20 people in the organisation, and the E stands for execution. So, you are not alone in thinking that. So, you joined as an early engineer at Brandwatch, where the engineering team fit in one room with a little outpost in Stuttgart, I think, and you sort of helped it grow to its acquisition. What were some of the most human lessons you learned about scaling, not just systems, but also people along the way?

 

[00:04:38] James:  I was thinking about this the other day, and I think the thing that was unique to Brandwatch was a constraint. And I think that constraint actually served us really well, which is that we were never gonna be able to pay top of market. We were never gonna be able to poach people from the largest companies. We were never going to be able to do the whole, like, OpenAI or Meta thing of giving people seven-figure sign-on bonuses to get the top, top, top talent, in terms of their definition of what they think talent is, right? But, actually, like, for Brandwatch, I think our amazing constraint was the fact that we really had to search hard to find the people who were, like, the up-and-coming, super-talented-but-hadn't-quite-got-the-experience-yet people, and then invest in them, and then give them the ability to, as I said in the previous answer, just kind of do a bit of everything and get their hands dirty and be generalists and solve problems. And maybe accidentally, we didn't necessarily think about it at the time, but, like, looking back, you know, that company was full of incredible people. Like, I would work with most of them again in a heartbeat, and it's because we were able to work within that constraint to find people who maybe, on paper, didn't look like the top 1% of talent for a role, but, actually, they were incredible people. And everybody grew. Everyone got promoted. Everyone did maybe 2, 3, 4 times as much as they thought they ever would. So, when it came to scaling people, it was just having those kinds of right-mindset people with the right talent in the organisation, where you could just keep giving them things, and they would very happily go outside of their comfort zone. And in engineering, specifically, like, in the early days, we did try and hire a few people from larger companies, and they just failed because of how the company worked, because they expected to have a neat queue of tickets coming into their inbox that they then executed on. That's just not how it was. So, that's something I look for a lot now in, like, the people that we hire over at Nordhealth: I don't really care how much experience you've got. Like, I wanna see that you just have the right mindset and you're willing to learn. Because generally speaking, the tooling is making everyone into even more of a generalist as time goes on, in terms of AI and the things that we can do, especially with coding. So, the skills are kind of interchangeable. And, like, not everybody likes him, but there is an Elon Musk quote that says, you know, the skills don't really matter because it's the mindset that is the thing you can't change. You need, like, a brain transplant for that. So, I learned about hiring people for their potential and not necessarily going through exact skill-set matches on CVs. I think that's the one thing I learned.

 

[00:06:54] Naomi:  Yeah. I always say, when I'm asked as a chief people officer, what are the lessons? What are the non-negotiables? And one of them is about building a strong hiring engine, because actually, great hiring compounds and terrible hiring costs you an awful lot of money and time. And obviously time is money, and so it's the same thing, right? So, yeah, building that strong engine, and then once you've got people, actually really developing them. In terms of developing them, at Shopify, you led engineering pillars within a global organisation, which is really known for its strong remote culture. So, once you've got those people in, how do you keep them feeling connected and creative whilst working across different time zones and different screens?

 

[00:07:39] James:  So, I'd say that, like, maybe 70% of what that involved was all of the common-sensical stuff. So, making sure that the teams are all roughly in the same time zone so that they're online at the same time of the day. Don't split teams so you've got half of them in Europe and half of them on the West Coast. It doesn't work. It's just insanely hard. You're setting yourself up to climb a mountain there. And then, again, lots of very straightforward things, such as having healthy budgets so the teams can get together and meet up in person when they need to, either because they haven't in a while or maybe they're about to start a new project and they want to plan it or hack on it. But I think, like, a lot of what Shopify did very well, and I think this comes straight from Toby, was that he was very, very militant, for lack of a better word, about single sources of truth and information. So, pretty much everything at Shopify was transparently searchable and findable. So, you could, as an employee, go on to our internal system, which was called The Vault, which is kind of like an intranet-wiki hybrid thing. And all the company communications were there. You could search for any project in any team in any part of the company. You could see all the updates of the project. You could see the code base. All the reviews of projects from the directors and VPs were all on those projects. You could read the decision logs for everything. So, they really put a lot of internal tooling and effort into making sure there's just one place to find everything, and then encouraging everyone to document everything with decision logs and asynchronous updates. And then I think the rest of it kind of took care of itself. Like, the whole, you know, doing remote well, I think, is not necessarily a solved problem because it's hard, but I think the fundamentals are the same across different companies.

 

[00:09:13] Naomi:  Yes. Interesting. The thing that I've always struggled with, though, is asynchronous working culture and that single source of truth, which you've articulated really well; it's a tenet of an asynchronous working culture. What I've always really struggled with is how to help people find out what they don't know. How did you overcome that?

 

[00:09:34] James:  I don't think there's a straight answer to that. I think one of the things that Shopify did very well, actually, was regular kind of all-hands, which were very well produced in terms of their video quality. And I think there was literally one guy whose job evolved into just doing these, like, comedy sketches before each all-hands. Like, the production value was so high. He must have just done that every week and not really much else, but everybody tuned in. And I think they would have, like, a rotating cast at the all-hands of the different execs, and they would always be quite good at extracting, like, what's a theme of interest for every given week. They'd invite in customers to talk about their businesses. So, there were lots of heartbeats throughout the business, both asynchronously and also synchronously with the all-hands, although some people watched those back as a recording. They would kind of serve as almost like one of those crane machines that you play at the fairground, where you dip in and, like, pull out the thing that's interesting and personal. And also just lots and lots and lots of oversharing. And I think Shopify also did, and I can't take any credit for this, they had very clear mission, values and principles of how you work, and they continually referred back to them over and over again, just oversharing those things and making sure that, like, when decisions were made, it always linked back to a principle. So, one of the principles which I always use going forward is, you know, building things that most people need most of the time. And then if you're dealing with a feature request that doesn't fit that definition, you probably shouldn't build it, or you should have a third party build it as an integration. And the leadership team there were very good at always drawing back to those principles on how you operate. And I think with time, by osmosis, you then find yourself thinking that way. So, you can kind of take the principles and the mission, and people then start to, you can think of it almost like the principles and the mission, the values, and the way that the exec operates are a bit like a machine learning model. And then the idea is, like, how do you get that model into everyone's heads so that you then get truly distributed leadership? And they did a very, very good job at that. And additionally, I found that they would work very hard on performing actions that would reinforce the principles. So, one of the principles was around, you know, a minimum number of meetings. Now everyone says that, right? Don't have meetings, don't have meetings, don't have meetings. But there was literally a program that somebody wrote that would delete everybody's calendar in the company once a year. So, you would come back after Christmas, and all your meetings were deleted. And then you had to go back and put them in yourself if you wanted to keep them. So, there were all these little, like, chaos monkey things that went on in the background that really reinforced the principles in a practical way. You're like, oh, wow, okay, all my meetings are hosed. Like, what meetings did I have again? I've actually forgotten.
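James doesn't say how that calendar-reset program was built, so the sketch below is purely illustrative: a minimal Python version of the idea, assuming an already-authenticated Google Calendar API (v3) service object. The function name and the "recurring meetings only" rule are assumptions for illustration, not Shopify's actual tooling.

```python
# Hypothetical sketch of the annual "delete everyone's meetings" idea James
# describes. Assumes `service` is an already-authenticated Google Calendar
# API (v3) client; nothing here reflects Shopify's actual internal tooling.
def clear_recurring_meetings(service, calendar_id: str = "primary") -> int:
    deleted = 0
    page_token = None
    while True:
        page = service.events().list(
            calendarId=calendar_id, pageToken=page_token
        ).execute()
        for event in page.get("items", []):
            # Only recurring meetings are reset; one-off events survive.
            if "recurrence" in event:
                service.events().delete(
                    calendarId=calendar_id, eventId=event["id"]
                ).execute()
                deleted += 1
        page_token = page.get("nextPageToken")
        if not page_token:
            return deleted
```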

 

[00:12:01] Naomi:  And therefore, what's the value? 

 

[00:12:03] James:  Yeah. Exactly. So, they do a very good job of that. 

 

[00:12:05] Naomi:  And so implicit within your answer, I think, are some of the sort of inherent tensions that we find in any working culture. And one of the ones that we both know about is the relationship between technical and nontechnical people. And I know you pride yourself on articulating and bridging that gap. What's your approach? Can you talk me through creating a sort of shared understanding, not just in a remote culture, but in the working cultures that you've been in where the stakes are really high, right, and communication can make or break a project? 

 

[00:12:38] James:  I think maybe one of the best things that I stole from Shopify was their approach to clean escalations, and that's making escalation, like, not a dirty word. And, effectively, the org chart was used as the escalation path. The idea was that at any time there can be people, either on different teams or the same team, who disagree with the way forward. And this would happen quite a lot at Shopify because it's a monolithic codebase. Everybody touches everything. So quite often, you can have two projects happening where, actually, they wanna take the code in different directions, and it'll just kinda go, no, you're wrong; no, you're wrong. Then you just start a clean escalation, which just takes whoever the people are that are disagreeing one level above on the org chart. So typically, you know, if it's frontline engineers, it'd be their engineering managers: can they come to an agreement on it? However, with the key caveat that if you start an escalation, you let go of the outcome. So, if I disagree with you on a thing and we are on different teams and we have different managers above us, if we agree to escalate, we do so knowing that both of us could lose. And then we let that escalation go upwards, and then a channel is created on Slack. Everyone's brought in. The debate happens. Now those managers may also disagree, in which case it goes upwards again, and it keeps going upwards. And it can actually go all the way up to Toby. You remember Andy Polhill, who we used to work with at Brandwatch?

 

[00:13:51] Naomi: Yeah. Yeah.  

 

[00:13:52] James: Yeah. So, he was at Shopify. He was one of my engineering managers, and we were working on gift cards. And there was a particular quirk with the way that we were gonna be storing a date value on gift cards, because we were allowing them to be scheduled. So you could, like, buy somebody a gift card for Christmas, and the email was scheduled to send on Christmas Day or whatever. It was one of those things where, just because of the way the code base was, it kind of, like, broke the architectural principle of how a few things fit together by having a date. And this sounds super low-level, and it was. And we were trying to say, okay, well, I think we should just break the architectural principle because we wanna ship this before Christmas, and, additionally, it's not that bad of a crime. And then there was one of the people in the other teams, who owned a different part of the code base, who said, I completely disagree, I will not let you merge that code. And then it went up to our managers, and then it went up to the VPs. We had Glenn, and we had Farhan in there, and they both disagreed. And then it ended up going all the way up to Toby as to where we should be putting this date field, because it breaks an architectural principle. And he decided that we shouldn't break the architectural principle, and so be it. And we moved on. So, because that whole process was in place, you knew that there was a very, very safe way of getting things resolved that involved also letting go of the outcome. And it became a really good tool, assuming that the whole management chain is bought into it. And, also, I think it reinforces the people-need-to-be-in-the-details kind of thing, because if these kinds of things can bubble all the way up, the managers need to really understand what's going on too. And I just thought the way that that was implemented was fantastic as a way of getting past these kinds of, like, locked heads, which can be 10 times worse when it's remote.

 

[00:15:22] Naomi:  Yes. It's interesting as well, though, that it requires a degree of potential vulnerability, potentially self-awareness, like all those sorts of very human skills. Certainly, a lack of ego to operate effectively in that environment. And how do you ensure that things like speed and scale don't come at the cost of empathy? Because that process of escalation sounds to me like good friction, but it doesn't sound quick, necessarily. Although I imagine the cadence of decision-making in an organisation like that is pretty pacey. But how do you ensure that speed and scale don't come at the cost of this human side of things?

 

[00:16:04] James:  I think it's relevant in my current role as well. Why those things worked is because the leadership of the company believed in, effectively, long-termism. So, the Shopify thing that was always said, which was one of the principles, is that it's a 100-year company. So, everything is compared against that time frame, which is, look, if we are gonna do something stupid to hit a deadline that we made up ourselves, in the context of a 100-year company, that just doesn't matter. So, we would do it the right way, and then we move on. And I think the same is true with the software that we make for veterinary practices. People stay on the same software for 20 years. They don't switch very often. So, rushing things just to satisfy a deadline you've set yourself is not really relevant in the long term. And I think you have to keep pulling everything back to the long term of, look, okay, if you promise a client something, fine, you go fast, and you make sure you meet your commitments. Ideally, you get yourself into a situation in the first place where you don't have loads of those commitments. You set the deadlines. You set the pace. But I think having that long-term view and thinking of long-term companies, lasting companies, doing the right thing now so that the person in 15 years doesn't try and find you on LinkedIn and hunt you down for the bad decision that you made, is a good thing. And I think what I quite like in that kind of long-term thinking is, you know, so much of the noise in the technology industry in general is around startups, it's around growth and speed and exits and this and that. And I think a lot has been lost there. Thinking, this could be my last company, no one thinks like that anymore, but it's a strength. And I think it can act as a really good antidote to all of that kind of thinking and worry if you think, well, we could all be doing this for another 25 years. So, in that time frame, how do we think about this decision if we know that we have to think that long term?

 

[00:17:53] Naomi:  That's so interesting. I think you're talking about something that is deeply human, right? You're talking about human time rather than machine time, I think. And with human time comes a degree of perspective that in a sort of accelerating automated world, we're encouraged to just kind of forget about, deal with somehow, which is very difficult to do, I think, and we're naturally attuned to operate at a different pace. 

 

[00:18:21] James:  Maybe this is kind of mildly controversial, but my observation is, I always remember when we worked together at Brandwatch, and I think it was 10 years ago now, like, sort of 2014, 2015. As soon as the mission became we are going to IPO, I think the quality of the decisions that were made, and the road map, and just the general where are we going and how we're doing it, suffered. Because then the horizon disappeared, the sense that we could do this forever, or we'll take our time, we'll build the best product, you know, that's the thing that matters the most. Because I think as soon as you put any kind of constraint around it, like, we have to effectively blow this thing up in two years or whatever for whatever reason, to give investors a return, or to do this or do that, then kind of the heart drops out of it, I feel, at least. And from what I listen to on different interviews and podcasts, outside of the AI bubble there's more of a 'don't sell your company', a 'don't take loads of investment', like, don't go as fast as you can. If you build a company, or if you work at a company, that fundamentally is like an extension of you, and it fulfils you, and it allows you to feel like you're contributing and getting better at your craft, why would you optimise for that thing ending?

 

[00:19:34] Naomi:  Yeah. Interesting. And so, the horizon is a point of orientation, isn't it, essentially? And that's why it's important to maintain that horizon in view. I love what you're saying about keeping the heart of a company. Do you see AI impacting our ability to do that? Because AI is, you know, of all the S curves, it's the most S. What's your experience there? What's your thinking?

 

[00:20:00] James:  I use it every day, all the time. I am aware that, from a business perspective, the kind of business operator will be looking at it as a throughput increaser, maybe primarily. So, I think maybe the job as a leader is to go, okay, yes, things will have an increased throughput. But, also, can we use these things in such a way that it doesn't erode our core skills of being human? There's a really nice mental model, I've forgotten the speaker's name, but when I was at LeadDev Berlin a few weeks back, it was this sort of assessment of: if I'm gonna do a thing, is this thing pure grunt work? If so, just, hey, AI can do the thing. It doesn't matter. I'm not gonna learn anything by doing that. So, take these eight different CSV files and merge them into one and organise them a certain way. If I spend three hours doing that, I'm not gonna come out a better person. So, full delegation there. But if it's something that maybe is a kind of an intuitive thing or a strategic thing, or something that could represent a skill I want to hone, in the same way that when you leave school, you forget, like, 90% of the stuff you learn, then I'm gonna make sure I have a first attempt myself and then enter a conversation with AI in order to try and make that thing better. I mean, one nice thing about the industry that I serve now is that no one in the veterinary industry is overstaffed. So, everything that we're building with AI is not gonna put anyone out of a job. Like, you go and visit a veterinary practice. The receptionist can't even answer the phone. They can't keep up with the admin. Vets spend hours every day typing all their notes and everything into computers. So, what's quite nice, at least where I'm sitting, is I know that what we are building in terms of automation tooling, we're pitching it as 'you get an hour a day back', 'you can go home', 'you can see your family', 'you can have dinner together'. So that's, you know, the privilege of the position I'm currently in. But I am very aware that there is a huge amount of worry at the moment about what's the endgame here. I know we've kind of slightly veered off the original question, but, like, depending on people's viewpoints, either we just have a great productivity tool now, and it will just get better, or, on the other end of the spectrum, the entire world of work and the economy is about to blow up. And what comes next? Are we all gonna become tradespeople or builders or something and have a physical job, I don't know, because all the digital jobs will just be gone? I don't know. I don't take a strong stance on that. I'll sort of see where things go. But, like, certainly, my own use as a person is I try to use AI in such a way that I keep the core skills honed that I want to keep going forward. And in terms of what we build, fortunately, we're actually alleviating stress and overwork with it at the moment, which is good.
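The CSV-merging chore James mentions delegating is roughly this shape; purely for illustration, here is a minimal pandas sketch (the folder name and the sort column are invented, not from the conversation).

```python
# Minimal sketch of the "merge eight CSV files and organise them" grunt work
# James describes delegating. The folder name and sort column are invented.
from pathlib import Path

import pandas as pd

csv_files = sorted(Path("exports").glob("*.csv"))  # e.g. eight separate data dumps
merged = pd.concat((pd.read_csv(f) for f in csv_files), ignore_index=True)
merged = merged.drop_duplicates().sort_values("created_at")  # assumed column name
merged.to_csv("merged.csv", index=False)
```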

 

[00:22:18] Naomi:  Yeah. So, that makes sense. But AI does change the baseline, doesn't it? And I think I worry a bit about how people are going to learn skills that, historically, they've been able to learn on the job, and that's no longer going to be possible. One option is not to use people, but, like, there must be other options there, and I think there's a real opportunity for us to think differently about the development of people, and differently about the opportunities for work for them to do, if it's not that sort of so-called entry-level work, for want of a better phrase.

 

[00:22:53] James:  The worry, of course, is that what was the entry-level work was the training, and then you remove the training. And I think this is the tricky part with software engineering, which is, okay, we're all going full-on AI coding because it automates a lot of the things that we know how to do but that just take a lot of time. But we had to do that stuff before, and that's how we learned how to code and how to build things. So, what's the strategy for new entrants into the industry who want to be software engineers, given that we don't even know what a software engineer is gonna be in 5, 10 years? Are we the last generation that writes code? I have no idea. But we'll see. But I did quite like what I was reading about OpenAI, how they basically do a pairing thing. So, every intern that they bring in pairs with a senior engineer there, and they just work together all the time. I mean, they do have the benefit of being in person, so it's kind of easier to have that constant contact. But I think it does mean that organisations do need to have a very clear training path for new people, because, I mean, we have to do it for the greater good. If every company doesn't do that, like, what do computer science graduates do now? And in every industry as well, like, there's a real worry that in 30 or 40 years' time, we will have outsourced all the skills to agents and AI, and nobody really knows how to do anything anymore.

 

[00:24:04] Naomi:  Yeah. And it all becomes very sort of jelly.

 

[00:24:07] James:  Yeah. And this is where I then think about this, and I start to get a little bit confused myself, because these tools are being created by humans, and humans should care about humans. But is the problem that the tools are being created by companies who are optimising for money, and therefore this is going very fast in a particular direction? But then if they didn't do that, the tools wouldn't be improving at such a rate. It all becomes extremely paradoxical. And, you know, I think this all goes back to the beginning: OpenAI was originally founded as a nonprofit because of, I think, a lot of these reasons, but it's clearly not one of those anymore. And what does this mean for the future? I'm not sure. But, certainly, we can only do what we can do in a given day with the tools and the people that we have. And I think with, you know, less senior staff, it is just making people really aware of the kinds of things that you should let AI do for you and the things that you should not. At least do the first draft yourself. Or even if you have AI do the first draft of a coding change, you then refine it as the human, and you make sure you understand what you're doing. You don't commit things until you really understand what they're doing. And I think that's the only option we have at the moment. Because if you think of the global maximum, if everyone's using these tools, the expected throughput of a software engineer will just keep increasing. And, also, I think code inherently has no value, because in the same way that I would maybe pay more money for a handmade mug or something, I'm not gonna pay more money for a software product that's, like, hand-coded. That doesn't make any sense whatsoever. So, given that there's no inherent value in being artisanal, I guess we just need to make sure that we're focused on keeping people's skills.

 

[00:25:37] Naomi:  Yes. I 100% agree, obviously. And you touch on what, for me, is one of the key differentiators between humans and AI: judgment, taste, discernment. And I think it's increasingly important. Has AI changed the way you lead?

 

[00:25:54] James:  Yeah.  

 

[00:25:55] Naomi: How? 

 

[00:25:56] James:  It maybe hasn't in terms of the output that people see. So, I still do all the same things that I used to: writing, talking to people, you know, all the leadership things. But I do feel like I've got a sidekick in AI now. So, I do interrogate a lot of my own decisions in private. I do a lot of additional research that I wouldn't have been able to do before. You know, I often use it as a thinking partner, pretty much all day now. I sort of split my screen into thirds, and the left-hand third is always mostly Claude, to just keep sort of sense-checking myself. And I wrote a few things on my newsletter a few months back about just some prompts. And in Claude Code, actually, you can define them as agents, which is quite cool. You can almost set up, like, little thinking councils. So, you can define agents for different roles, and I think you commented on this. You sent an email when I posted it. You can go, okay, well, imagine I want, like, a technical council. Like, it's got, like, a security engineer and a QA engineer and a principal engineer and all these people. And, yes, I've got them on my team, and, yes, I still work with those people. But what if I could run every single thing that I'm thinking past this council? And you can define the council, and you can just go, hey, I'm thinking about doing this thing. I wanna build this, or maybe I want to achieve this. What do you think? And then those agents can kind of go off and give you some ideas. And then, back to your question of how it changes the way that I lead: I find that anything that I start to move outwards into broader communication has already had a much better interrogation pass than it would have pre-AI, I think.
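As a rough illustration of the "thinking council" pattern James describes, here is a minimal, self-contained Python sketch. The roles and the ask_model stub are invented for illustration, and Claude Code defines its agents through its own configuration rather than like this; the point is simply to show one question being run past several role-prompted reviewers.

```python
# Illustrative sketch of the "thinking council" pattern: run one idea past
# several role-prompted reviewers. The roles and the ask_model stub are
# invented; this is not how Claude Code defines its agents internally.
from dataclasses import dataclass


@dataclass
class CouncilMember:
    role: str
    brief: str


COUNCIL = [
    CouncilMember("security engineer", "Look for security and privacy risks."),
    CouncilMember("QA engineer", "Look for testability gaps and edge cases."),
    CouncilMember("principal engineer", "Challenge the architecture and scope."),
]


def ask_model(system_prompt: str, question: str) -> str:
    # Placeholder: in practice this would call whichever LLM you use,
    # passing `system_prompt` as the role instruction.
    return f"({system_prompt}) Feedback on: {question}"


def consult_council(question: str) -> dict:
    answers = {}
    for member in COUNCIL:
        system_prompt = f"You are a {member.role}. {member.brief}"
        answers[member.role] = ask_model(system_prompt, question)
    return answers


if __name__ == "__main__":
    idea = "I'm thinking about rebuilding the booking flow."
    for role, answer in consult_council(idea).items():
        print(f"{role}: {answer}")
```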

 

[00:27:23] Naomi:  That's interesting. Yeah. I think I told you that you needed a chief people officer on your council. 

 

[00:27:28] James:  Yes. And that highlighted the fact that we don't currently have one at Nordhealth, which is why the council agents didn't have one. 

 

[00:27:34] Naomi:  Let's talk a bit more about your writing because I know, so I said you've written three books. You're fairly prolific. You also publish regularly on Substack. When we were talking a couple of weeks ago, we started to discuss the role that AI plays in your writing and in writing in general. Explain to me. What role does AI play in your writing? 

 

[00:27:56] James:  In terms of the writing itself, it doesn't. But in terms of the bookends of the process, it does. So, I use it heavily for research, especially if I'm gonna write about something. I like that the AI can go out and check: has anyone written anything about this before on the Internet? Where has it occurred before? Am I at risk of duplicating anyone's effort here, or are there any opinions that I don't know about that I should know about? Like, in these research phases, I lean very heavily into AI, and it's incredibly useful. Additionally, as I'm sort of kicking around ideas, I use AI a lot in two ways. One of them, and I'm not getting paid by them, but I use Wispr Flow, the speech-to-text thing, all day, every day. So, I don't really type anymore at work, because I work from home, so I can make a noise. So, I dictate everything, because Wispr Flow is amazing. It just pretty much one-shots everything with the right punctuation. So, often when I'm thinking of ideas, I will just speech-to-text dump everything I'm thinking about in a very unstructured, messy way into a transcript. And then I'll use AI to refine the transcript into maybe some themes and some bullet points of, like, the main narrative that I was stumbling around. And then I'll use that as my sort of research, and then I'll write the thing, which I find really, really helps me organise my thinking. But I write the thing myself; like, I dictate it, or I type it. And then after it's been written, I do also use AI for: what do you think of this? Given my audience is this, how will it go down? Am I missing anything? Are there any blind spots? Also, it's very good at grammar and spelling as well, you know? Find spelling mistakes, find awkward grammar, run-on sentences, repeated words; it will do a pretty good pass on that as well. But I don't have it output the text and then copy and paste it. I will just say, give me the list of changes, and then it will highlight, sort of, line 130, change this word to this word, and then I'll go and edit it manually. So, the middle pillar, which is the actual piece of writing, is something I do manually, but I do get assistance on either end. And I don't know what it is, and maybe the models will get better with time, but you can just tell when something has been generated. I don't know whether it's, like, taste is, like, a derivation of consciousness or something, where you can just tell when, like, a real human has written something or created something or drawn something. But I still think that's very important, and I would feel uncomfortable, especially when people are paying for the newsletter or paying for a book, if I didn't feel like I'd done the work.

 

[00:30:10] Naomi:  Yeah. Yeah. No. It's interesting. I've been thinking about this. So, I just finished reading a memoir. I'm not a big fan of memoirs, but this is a super interesting one. And one of the things I like about it is that it is imperfect. It's about photography; it's by a photographer. And that in itself has gone through a whole journey of automation and digitisation and so on. But, actually, she's in her 60s now, and so she still takes, you know, proper old-school photos. And I just think there is a kind of, there is an aesthetic, there is a quality; it's like the vinyl-versus-digital-music thing, right? There is something there that is almost tangible that I would miss. And in her writing, it's also imperfect and scrappy, and I just really enjoy reading it. I don't wanna read gloss. Like, I'm not interested in gloss.

 

[00:30:57] James:  It's true. Like, I think what's interesting about AI and mechanisation and automation in general is that I think it bifurcates markets. So, things that are created by humans, actually, I think, increase in value. So, think of fast fashion versus a handmade piece of clothing. Think of the screenprint you buy at Dunnell versus an oil painting. Like, those things maintain value because, for the right people, they are worth something. And I think the same is true with writing, with photography. Unfortunately, not with coding; there isn't really, like, artisanal code. So, that's kind of the whole problem that the industry is gonna face. But, you know, talking about, like, you know, Rebecca as a designer.

 

[00:31:35] Naomi:  Becca's your partner? 

 

[00:31:37] James:  Rebecca's my partner. Yeah. Sorry, I keep forgetting the recording here. But she's a creative director and designer, and that industry has been completely blown to shreds, not necessarily because the output has changed that much in terms of what people can produce, but it's just that no one's spending any money on big in-house design teams or commissioning big pieces, apart from maybe at the sort of 5% artisanal top end. But, you know, for those people who are creative, art has value, physical objects have value, and I think we just have to adapt. And I don't know what software engineering will be in the future. Like, will it be so trivial to build software that it doesn't really have any inherent value anymore, and therefore you can't really have high prices or subscriptions? I have no idea. So, I know that within my lifetime, you know, I might have to pivot into something different. Who knows? Let's try and see that as an opportunity to learn or do something different, I guess.

 

[00:32:29] Naomi:  So, let's, if I may, continue a bit on that sort of personal vein, because when we talked in preparation for this, we talked about your 18-month-old daughter. And I know you've been thinking really hard about how to parent in this increasingly automated world, and I'm obviously coming out the other end of it with my 18-year-old daughter. I'd love you to take me through some of your reflections here. You know, what scares you? What are you excited about? And maybe explain for people how you navigate both and parent in a way that is true to you as an engineer and technologist, as in someone who is passionate about this stuff and has built an entire life around it, but also as someone who cares deeply about staying human. How do you walk that edge?

 

[00:33:13] James:  I wish I knew. I mean, I think what I've learned about parenting is that it's very intuitive. Before my daughter came along, I read all the books about it, and I read all the websites, and then it all just goes out the window when you're dealing with the real thing in the moment. But, I mean, I think, certainly, some technology things that we are thinking about are that we're going to books rather than television, and we have very free access to lots of children's books around the house. So, she just picks things up and looks through things, and we can read. We don't really put the TV on that much anyway. That was kind of us in any regard; we sort of read anyway, so that wasn't too hard. We don't use our phones around her where we can avoid it. We don't have a tablet in the house. And what's the root of that? Maybe, I mean, one is that, you know, at least with a book, I know what's in it, and it's educational or at least fun. Giving a child a tablet, you know, that's just unlocking the entire Internet, and that is very scary to me. But also, I'm very aware that we are all affected by the dopamine cycle of technology, and I don't want to feel responsible for introducing that too young. And it's interesting, isn't it? Because I think everyone's a hypocrite, because I spent my teenage years playing video games on the Internet and spending all night on the computer and all that kind of thing. And now I'm thinking, oh, as a parent, would I want that to happen? I'm like, no, no, I don't. So, really, I think what I'm focusing on is: can she learn everything in a way that I believe is stimulating in the right way, and keep away things that I believe could be harmful in the long term? And can I sort of move her curiosity towards things that, with the experience of looking back, I maybe wish I'd done a bit more of as a kid? So, I'm thinking now, like, you know, musical instruments. I never played anything growing up. Like, I didn't teach myself something until I was sort of in my late teens, early 20s, and I wish I could have done that earlier. So, you know, thinking about what are the things that I didn't have access to that I'd love for her to have access to, not to force, but just to allow some self-selection if needed. But certainly, I do worry a bit about technology as a distraction and the dopamine cycle, and I do want to keep that away from her for as long as possible.

 

[00:35:16] Naomi:  And I think, without wanting to be too trite or overstate the metaphor, there is also a relationship between how we are navigating this edge in our personal lives and how we're navigating it in our professional lives and helping our teams navigate it. And I'm really interested in your leadership habits: what's new, what's different about what you do as a leader now that is powerful in this new world, that maybe you didn't do even three years ago?

 

[00:35:49] James:  I have an example from literally this morning. So, you know, AI is amazing for coding. We're just about to start our 2026 planning, where, you know, every single team submits all the stuff they'd like to work on and how many people it might need per discipline and so on. And, usually, that becomes a gigantic, horrible admin task of spreadsheets and just chaos. So, this morning, I pretty much vibe-coded from scratch a web app that allows you to enter all the projects, stack rank them by priority by dragging them around, and then it has all the engineers and all the leads allocated down the side, and then it can, like, auto-solve the allocation based on the priority of the projects and how many people you need for what, and then tell you where things start to not be able to be solved by their home team and they need, like, people to move across teams. And as a leadership habit, not only is it great that I can automate something that was really hard, I shared it with everybody as well and said, hey, like, I've just built this thing. And I think they like that, in the sense that I'm not just some random, like, exec manager person. It's like, I'm actually, like, building some stuff and getting my hands dirty, which I think works both ways. I find it really fun, so it's good.
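James doesn't share the app itself, but the auto-allocation step he describes (walk the projects in priority order, staff each from its home team, and flag where people would need to move across teams) could look roughly like this minimal sketch; the data model and names are invented for illustration.

```python
# Rough sketch of the allocation step James describes: walk projects in
# priority order, staff each one from its home team's spare capacity, and
# flag the ones that would need people from other teams. Invented data model.
from dataclasses import dataclass


@dataclass
class Project:
    name: str
    home_team: str
    engineers_needed: int


def allocate(projects, team_capacity):
    remaining = dict(team_capacity)  # engineers still unassigned per team
    notes = []
    for project in projects:  # assumed to be pre-sorted by priority
        available = remaining.get(project.home_team, 0)
        staffed = min(available, project.engineers_needed)
        remaining[project.home_team] = available - staffed
        shortfall = project.engineers_needed - staffed
        if shortfall:
            notes.append(f"{project.name}: short {shortfall} engineer(s); needs help from other teams")
        else:
            notes.append(f"{project.name}: fully staffed by {project.home_team}")
    return notes


if __name__ == "__main__":
    plan = [
        Project("Online booking", "Platform", 3),
        Project("Invoicing revamp", "Billing", 4),
    ]
    for line in allocate(plan, {"Platform": 3, "Billing": 2}):
        print(line)
```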

 

[00:36:53] Naomi:  Yeah. I think that ability to go deep and wide, the kind of T-shaped leadership model, is something that I really subscribe to, actually. And when we're hiring people at Mews, particularly at leadership level, that's what I look for, because I think it's something that teams trust. They want to know that you can be in the trenches with them, but they also want to know that you've got a bigger picture. And like we were talking about earlier, you have that view on the horizon, and you can kind of steady the ship at the same time. If you could introduce a new, seemingly impossible culture into every workplace in this new world, what would it look like?

 

[00:37:27] James:  I've maybe got a couple of answers to that. So, one: could everybody in every discipline be trained on the tools to write code? The reason is that, if you can have everybody able to build little tools, even just for themselves... Everyone does stuff that is boring, administrative, annoying workflow-automation stuff. Like, everyone has to deal with spreadsheets. Everyone deals with awkward data dumps. And if everybody could have, like, a baseline level of skill where they could open Claude Code and just build some internal tools that could do stuff for them, that would be amazing. And I'd say that was one thing that Shopify did do pretty well, actually: once they rolled out Cursor, the coding IDE, for everybody, usage actually shot up in sales and marketing and other places as they just started to try and build little tools to automate whatever they were doing every single day. So, that was fantastic. And then, on a sort of, like, a joke note, when I was at LeadDev Berlin the other week, I was talking to a chap called Bruce, who's an engineering director at Netflix. And for the last, I think, like, 10 years, every single piece of 360 feedback he's ever gotten from a performance review, he posts publicly online on his GitHub. So, you can literally go to his GitHub, and all the good and the bad peer feedback is anonymised and on there, which I think is just hilarious. It keeps you accountable.

 

[00:38:38] Naomi:  It does keep you accountable. In both of your examples, there is an element of something that is deeply human, which is play, which I think is really, really interesting. And I think I'm increasingly of a mind that play is a way in which we will differentiate ourselves from computers because we have that. We like that element of surprise that is implicit within play. I think that's really interesting. And this is your first CTO role, right? Am I correct in saying that? 

 

[00:39:06] James:  True. 

 

[00:39:07] Naomi:  But you're obviously a very sort of tenured engineer now. What's a lesson that you have learned that you wish every new CTO learned early on? 

 

[00:39:20] James:  I would say, I mean, you mentioned it before: T-shaped leadership and being in the details. Like, you gain the biggest trust and respect by being able to just drop down into any team and have a conversation about the code. And the same is true with any function; you know, it doesn't have to be about code, it can be whatever any of your teams are working on, to be able to sort of slot into a stand-up or slot into a team's channel and just work together on a thing, and just show that your position in the org chart doesn't really mean anything other than you just have more stuff going on that you're accountable for. And also that that is not an anti-pattern. I think I mentioned at the beginning of our chat that during the mega-scaling era, from, like, the late 2010s through to the sort of 2021 era, all these companies expanded huge, huge, huge, with a million layers of management and a million teams, and everyone became very, very siloed. And it was like, oh, no, no, you don't want the CTO to get involved in this decision. You don't want the CTO to be looking at the code. You don't wanna do that because they're the CTO. It's like, no, no, actually, that's exactly what you want them to do, right? Because you have some experience. That's why you're doing the role. You have an opinion. That's why you're doing the role. So, making it a good thing for people to feel that I'm getting close to their project because I'm really interested in it, or it's really impactful, or I wanna make sure it's a success, you know, it's not a bug. So, I'd say staying in the details is really, really important. And the nice thing now is that, you know, with AI, you can stay closer to coding if you want to. You can, if you want to, pair program with people occasionally in your organisation and just keep close to what's going on, because you just don't know otherwise. It's impossible to know what's happening.

 

[00:40:47] Naomi:  Yeah. It's interesting. And I think it's very human, right, that feeling of you're not doing it to catch people out. You're doing it to support them and develop them and get them to be better. What is one thing as a technology leader that you would never automate? 

 

[00:41:03] James:  The whole kind of building of trust, getting to know people, having one-to-ones, having group meetings with leads, having face time in real life occasionally as well. I don't think that disappears because, I mean, that's the kind of essence of what work is, you know? You are doing high-impact things with people you have high rapport with. And if you can't build the rapport, then, you know, we're not just automatons. So, as much as the introvert in me would love to just never have to do a one-to-one ever again, it is very important. And I was watching an interview with Michael Ovitz, the Hollywood agent, on David Senra's channel the other day, and he was very, very vocal about integrity and relationships. And, I mean, he was an agent, so that's kind of the whole business. But it's very, very true, and I think that was one of the things that I did find a struggle at Shopify; it was just such a big company. And even in big-company terms, it's not as big as the biggest companies, but it's still, like, a 10-to-20,000-person company. There is no opportunity to really ever get to know most of the people that you're working with, because it's so fast and fleeting. And I think that was one thing that did become a struggle over time, which was that you never really got to know anybody, and it was impossible to meet all the people that you collaborated with. And sometimes you'd never even speak to them on Slack. You would just collaborate on a code review or something, and it's like, who are you? I don't know. And that is, in a digital world, quite isolating over time, I think.

 

[00:42:26] Naomi:  It's interesting to hear a CTO talk about the value of humanness in a code review. One would almost think that they were diametrically opposed, but what I'm hearing from you is that, actually, the human layer adds value to your understanding of the code. Maybe that's about intention. Maybe that's about the quality of the work. Maybe that's something else. But that's what I'm hearing from you, that the human layer adds value. Is that a fair reading?

 

[00:42:57] James:  I think that's true. Because, I mean, if you think of a code review, a code review is just a process, in the same way that, and it's not exactly the same as this, but if you are a student and you do some work and then you submit it for grading, and then the teacher grades it, the grading is the process. But the learning is where you sit down with the teacher afterwards: okay, well, why did I get a B+ instead of an A? And, like, okay, well, this question, you should have approached it in a different way. That's the key bit. And code review is the same. Yes, there is this kind of automated process where there is a web app and a UI, and you write your comments, and you approve, or you don't approve. But, actually, what's better is, if you're gonna review something and maybe you think, oh, I'm not really sure what they're doing here is the right thing, maybe just have a chat with them, walk it through together, you know, maybe just talk, and the review process is just the vehicle for the interaction. And I think that's really important because, otherwise, we're not really working with anybody then, just working with ourselves.

 

[00:43:46] Naomi:  Yeah. I love that phrase, the vehicle for interaction. I think technology can be a vehicle for interaction. We just need to remember that and keep it in its place, right? So, if you could give any engineering leader one piece of advice about building truly human-centred tech teams, what would it be? 

 

[00:44:03] James:  I'd say, like, especially with technology, and referring to what you said about play, I think a lot of people who ended up as software engineers got there through the route of fun or play. You know, they were teenagers building a website, or they were messing about trying to hack one of their games to give them infinite-money cheats or something; you know, that's how a lot of people got into it. And I think trying to make sure that there's, like, enough play and fun in what you do and the culture that you build, so that when you come to work you can, as much as possible, have some fun and learn something, is very, very important. And, you know, I've found that with AI recently, it's actually allowed us to have a whole lot of fun. So, we can automate a lot of the things that we hate doing. We can come up with really interesting tools that we can use locally and with each other to just do things that we never thought were possible. So, like, fun has actually come back into the industry, I think, for the short term at least. So, yeah, I would say encourage fun and what makes people curious, and all of that just comes through hiring great people and not holding on to people who are horrible. As you know, it's all about good personalities, good collaboration and a good mindset. And, you know, having a culture that sets just enough of a challenge that people feel they're really striving to achieve something, and they feel accomplished when they get there. But the way that you get there should be fun. And if you're not having fun, then maybe think about going somewhere else.

 

[00:45:25] Naomi:  Yeah. I couldn't agree more, really. We have a constitution at Mews, and my favourite article, I think, is never underestimate the importance of humour, because it just makes everything go more smoothly. So, the last question from me, you've seen the tech world from many angles, and we've covered a bunch of them: founder, leader, author, advisor, parent. What keeps you motivated when it comes to this industry? 

 

[00:45:47] James:  I mean, that's a really kind of personal question, isn't it? Because I think, depending on your view of AI, we could be going in some very bad directions in terms of the industry and also the people within it. But, I mean, what keeps me motivated, from, like, my own local maximum, like, the role I'm in, is that I feel like I'm providing a service to something that means something. So, we are helping vets do their job, which is a good thing for everyone. And then when it comes to wider developments, I always like to think that civilisation, humanity, has gone through many, many cycles of growth and decline and growth and decline, and humans always find a way of coming out of whatever challenge we're facing with something that is good. Think of before computers were computers, you know; there were actual human beings called computers who used to sit at desks and do maths all day, and no one would wish that upon anybody now. So, maybe if it is the case that all of our jobs are going to change or even disappear, and I hope that isn't the case, I reckon new professions, new jobs, will spring up, and there'll be a renewal cycle all over again, you know? And also, more broadly, inside the whole AI thing, yes, it's generating slop. Yes, it's generating videos that then get posted on TikTok, and people waste all their time browsing these AI-generated videos. But I look, and I think about all the medical problems that will be solved, all of the breakthroughs in care, all the maths problems that will get solved, all the physics problems that will get solved; that will happen as well. And I think there's good and bad to everything, and, you know, everything is a life-and-death cycle, and it just renews again and again and again. I'm not sure where we are in that cycle at the moment, but I'm always optimistic that we'll find a way in the future.

 

[00:47:34] Naomi: No. I love that. Yeah. There are a series of trade-offs, right? But, actually, I think that's a really lovely place to end, that there is a sense of hope, not just because of the way in which AI is changing our jobs, but because of the computing power, the problems it will enable us to solve that we haven't yet managed to solve as, sort of, humanity. Wow. Thank you so much, James. I really appreciate such a thoughtful set of answers and such a wide-ranging set of answers. Thank you so much.

 

[00:48:03] James:  No worries. Thanks, Naomi. It was a really good chat. 

 

[00:48:07] Outro:  The Future is Human is brought to you by Mews, the cloud-based hospitality platform. If you want to learn more about what we do, visit mews.com. And if you'd like to listen to more conversations like this one, find us on Apple Podcasts, Spotify, YouTube, or wherever you listen. Subscribe so you don't miss future episodes. Thanks for listening. 


Loved what you heard? More to come.

We’re working on exclusive content for Future is Human subscribers. Sign up to stay in the loop.
