You’ve announced your vision, set your strategy, and now you actually want to implement AI and change the way your company works. Time to find your team. But how do you do this? If you’ve never put anything like this in place before, how do you know who the right people are to join such a team? How do you pick a leader, and how do you know they are leading the team in the right direction, with the right boundaries?

Assessing a leader
There are so many people who now have AI on their CVs that it seems every person and their dog is capable of running a team focused on AI, and telling them apart can be difficult. Liken the task to finding a leader for any new technology, and three questions emerge:
Do they understand the old way?
Do they understand the new way?
Do they understand your underlying business?
You will need a blend of good answers to these three questions. With respect to a leader in AI, you might refer to the “old” way as classic programming (e.g. Java, C++ or Python, with relevant problem solving skills). The underlying business is of course relevant specifically to your company. When it comes to understanding the “new” way, i.e. AI, as well as all of the usual areas where a good leader will shine, it is worth delving into a few specific areas around AI leadership:
Anyone who is serious about AI should be serious about data. Look for a candidate who not only understands how important data is, but also knows how to make data available for AI modelling, appreciates governance, and has a plan if these things aren’t in place.
A good candidate will understand how AI works. A great candidate will understand that just because you can use it, doesn’t mean you should. AI is typically an extra dimension which adds insights on top of those that well-organised data alone will give you.
Check whether the candidate understands why AI is being used, rather than simply parroting the usage or implementation of specific AI (especially LLMs!).

Team
You should be able to rely on your leader to build out a team which implements and delivers AI. However, you should also understand how a team’s composition bounds what it can deliver. A team without developers will not be developing products, or getting any programs to production. A team without data scientists will likely not be analysing data with the correct techniques, or be aware of the latest tools available to do so. A team without a good leader will likely waste time and effort heading down unnecessary rabbit holes.
We’ve already dealt with finding a leader in the previous section; beyond that, it’s useful to understand the general make-up of a successful team:
Developers - if you want to create usable products in a production environment, strong developers should make up 70%-80% of your team. Bear in mind that your strongest developer will typically also deliver 70%-80% of your codebase, so finding a strong lead developer is critical.
Data Scientists - you don’t need someone with deep experience in one particular technology; you need experienced modellers who are adept at changing modelling methods as new technology comes out.
Support/DevOps - in all likelihood, your new product will need constant monitoring, especially when it comes to ensuring the data you need for any analysis is accurate and timely.
The north star goal of your team should be releasing and supporting products in the production environment. As mentioned in previous articles, a proof of concept is an interesting and useful milestone on the road to success, but ultimately recognition of success should come almost solely at the production stage.
Culture
Even with the right mix of talent, culture decides how your AI team behaves when the pressure is on. It shapes what people do when shipping faster might clash with shipping safely.
You’ll want a culture that blends scientific curiosity with a sense of responsibility. Not just “can we build this?” but “should we?”, and “what happens if we’re wrong?”. That requires people to feel they can raise concerns about data, models, or risks. And incentives matter too. If your reward structures celebrate speed above all else, you’ll get speed and the blind spots that come with it. If you recognise the work that makes models trustworthy, traceable, and transparent, the team will naturally build those habits into their product building.
We have noticed tension between research ambition, ethical guardrails, and commercial goals. But if the culture is clear, and reinforced through how you set goals and reward decisions, the team will know how to navigate that tension without needing to consult the rulebook for every situation.
Is it better to enforce these things, or to incentivise them? Whilst it might excite some to chase the latest research, would it be better for the company to strive towards ethical or business goals? Set the culture as part of your vision and strategy, and weave the incentives into your company’s reward structures.

Team Level Boundaries
Immediate boundary setting should be delegated to the appropriate people in the team you have set up. Oversight of the entire setup should be available to anyone at any level, and transparency is key. Whilst the senior stakeholders might not need to know all of the details, it’s useful to have an overarching understanding of what underpins a successful team. You should ensure:
Boundaries are set on who can touch which environment (as is standard development practice)
Recognition of success is weighted almost entirely towards what has actually been released to production
The leader of the “AI Team” is able to hire as they see fit. As a broad brush stroke, their team probably needs 60%-80% developers, with the remainder split evenly between data scientists or ML specialists and DevOps/support
Specialists of a more specific nature (e.g. ethics) get brought in when the product in question demands it
Firm Level Boundaries
Larger firms will usually have independent groups covering areas like:
AI Model Governance, including responsible AI practices
Data Governance, to ensure that data is collected and used in line with both the firm’s wishes and with regulation and law
Legal & Compliance, ensuring the ethical use of AI and the lawful use of software, AI and data
Higher-level boundaries should be set by independent bodies within a firm once it is large enough. Smaller firms may need people to wear multiple hats at first, but the principles remain the same: clear oversight, independence, and no conflicts of interest. The boundaries should reflect the difference between low-risk and high-risk products, so as not to drown every project in the same level of process.
Someone needs to be accountable for sign-off before anything is deployed in order to make the responsibility visible. Once something goes live, governance doesn’t end. It shifts to monitoring and knowing when a model should be pulled back or retired. A good governance setup encourages honesty, with the goal to make sure the right things scale and the wrong things don’t.
It is important to stay within stated boundaries; however, it is also important that staying within these bounds doesn’t consume more effort than the actual work itself. A pragmatic set of solutions, which take minimal time from the team’s workflow, is ideal.

In conclusion
Stakeholders don’t need every detail, but to build trust they need clarity on how the team works and where the guardrails are.
A strong AI team is measured not just by what it ships, but by how safely it performs once everything is live. With the right culture, boundaries, and oversight, teams can move fast without losing control. When “good” is defined upfront (safety, fairness, resilience, and functionality), the team can build with confidence and deliver trustworthy AI.
Overlay transparency of the team’s composition and goals with a set of practical overarching goals and boundaries, and you have a team set up for success.
