So, you want to build AI for professionals
Artificial intelligence (AI) is the technology du jour. It's a tool that people might use in their day-to-day work. But if you're putting AI in tools for professionals, it might not be as easy to pull off as you think.
Being a professional is different to having a job. A professional is part of a profession, a line of work that has an ethical dimension. Think of doctors, lawyers, architects. Professions involve a commitment to a key human good which helps others thrive and prosper.
Professionals often have an impact on people's lives through their work, which brings in the ethical dimension. Professions tend to formalise their ethics through codes of conduct or professional standards. Think of the Hippocratic Oath, the Solicitors' Code of Conduct or RIBA's Code of Professional Conduct for architects. These codes tell professionals how to act.
A profession might be regulated through law and a regulatory body, but there might also be professional bodies that set codes for their members. The Royal Town Planning Institute (RTPI) is a professional body for town planners, and it has a code of professional conduct too.
There's lots of buzz around using AI in planning. It can help with modelling housing growth, for example, which can inform decisions about developing new homes, amenities and infrastructure. It's thought that AI might make planning processes more efficient. Indirectly, this could lead to more housebuilding and, eventually, more homes for people to live in.
But transparency is key. Even when you're planning a new town on paper rather than with computers, the RTPI (a chartered institute) expects its members to show their working and explain how they made decisions.
‘Members must base their professional advice on relevant, reliable and supportable evidence and present the results of data and analysis clearly and without improper manipulation.’
RTPI Code of Professional Conduct, accessed 14 March 2025
This requirement means that any work handed to a large language model (LLM) or AI agent must also present its data and methodology. Explainability is likely a necessary feature of any AI tool designed to be used by planners, so that those planners can comply with their profession's code of conduct.
This isn't the only risk to design with and for. Trust in AI depends heavily on the user's perception of an AI system's ability. By surfacing the input data and the decision-making process, and by giving a professional the choice to run a different model, you can make whatever you build more trustworthy. Design patterns exist for showing confidence, changing the model, and sharing the information which shaped the output – the sketch below illustrates one way to carry that information.
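To make that concrete, here is a minimal sketch in TypeScript of one way a planner-facing tool might keep evidence, methodology, confidence and model choice attached to every output. All the names here (EvidenceSource, ExplainableOutput, runModel, compareModels) are illustrative assumptions, not a real API.

```typescript
// A piece of evidence that shaped the output, so a planner can verify it.
interface EvidenceSource {
  title: string; // e.g. "household projections dataset" (illustrative)
  uri: string;   // where the underlying data lives
}

// Everything a professional needs to show their working.
interface ExplainableOutput {
  answer: string;            // the generated analysis or advice
  sources: EvidenceSource[]; // the data the answer rests on
  methodology: string;       // plain-language account of how it was produced
  model: string;             // which model produced it
  confidence: number;        // 0 to 1, shown to the user rather than hidden
}

// Stand-in for whatever inference call the real tool would make.
async function runModel(model: string, query: string): Promise<ExplainableOutput> {
  return {
    answer: `Projected housing need for ${query}: …`,
    sources: [{ title: "Example dataset", uri: "https://example.org/data" }],
    methodology: "Trend-based projection over the supplied dataset.",
    model,
    confidence: 0.72,
  };
}

// The "change the model" pattern: the same question put to several models,
// so the professional can compare answers side by side.
async function compareModels(query: string, models: string[]): Promise<ExplainableOutput[]> {
  return Promise.all(models.map((m) => runModel(m, query)));
}
```

The specific fields matter less than the principle: provenance and confidence travel with the answer, so a planner can show their working and, if they choose, re-run the question against a different model.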
Additionally, there may be downstream processes – regulatory or otherwise – that need to be complied with. Continuing with planning as an example, the Planning Inspectorate has guidance on the use of AI in any appeal, application or examination it is dealing with.
In my view, these are not blockers but constraints. Design is all about embracing constraints and thriving in the challenge of compromise. And these probably aren't the only constraints out there – but you should discover the constraints before using AI in a domain with an ethical dimension.
If you can agree on the constraints, you might create a productive realism within them.
