
Trustworthy AI Bill of Materials

Standardizing AI system composition and trustworthiness claims to ensure transparency, accountability, and trust in AI.

TAIBOM is an emerging standard to describe and manage AI systems and AI system risk. TAIBOM addresses the full AI supply chain, from training data through the results that AI systems produce.


Why TAIBOM?

To manage AI systems and their risk, we need to know what version we are using, where it came from, and how it's put together.

If we can't label, version, and attest a system (be sure it's what it says it is), we have no foundation for making higher-level statements. It's like building a house on sand. We need robust, cryptographically backed system descriptions before we can start.

TAIBOM addresses the technical challenge of providing formal descriptions of AI systems and their dependencies. It also provides a method of describing and validating subjective claims about the qualities of these systems.


Technical building blocks

TAIBOM is underpinned by the interoperable W3C Verifiable Credentials standard and provides:

  • A formal method of describing and attesting to the constituent parts of an AI system
  • A method of describing larger systems and the dependencies within and across AI systems
  • A method of annotating components and systems with security attributes that carry positive or negative risk implications
  • A method of reasoning about the inherent risk in an AI system by transitively propagating risk (see the sketch after this list)
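
As a rough illustration of the first two building blocks, the sketch below describes a trained model as a W3C Verifiable Credential-style document, linking it to the components it was built from and attaching a simple annotation. The envelope fields (@context, type, issuer, credentialSubject) come from the Verifiable Credentials data model; everything inside credentialSubject, along with the identifiers, is invented for this sketch and is not the published TAIBOM schema, and the cryptographic proof is omitted.

# Illustrative sketch only: a component description wrapped in a W3C Verifiable
# Credential-style envelope. The credentialSubject fields and identifiers are
# hypothetical, not the TAIBOM schema; a real credential would also carry a proof.
import json

model_attestation = {
    "@context": ["https://www.w3.org/ns/credentials/v2"],
    "type": ["VerifiableCredential"],
    "issuer": "did:example:ai-system-trainer",       # hypothetical issuer identifier
    "credentialSubject": {
        "id": "urn:example:model:classifier:0.9",    # the component being described
        "componentType": "trained-model",
        "dependsOn": [                                # references to other attested components
            "urn:example:dataset:web-crawl:2.1",
            "urn:example:code:trainer:1.4",
        ],
        "annotations": {
            "licence": "CC-BY-4.0",
            "knownIssues": ["training-data licence unclear"],
        },
    },
}

print(json.dumps(model_attestation, indent=2))

Because each constituent part carries its own attestation, descriptions can be issued independently by different actors in the supply chain and combined later, which is what allows risk to be propagated transitively across the dependency graph.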

Who can use TAIBOM?

TAIBOM is distributed by nature, reflecting the complex supply chain of AI systems.

  • Data owners: owners of the data on which AI systems are trained can version and annotate their data with things like copyright licences.
  • AI software developers: can version their software and annotate it with licences or vulnerabilities.
  • AI system trainers: can version trained models and annotate them with things like performance metrics, bias metrics, licences, or vulnerabilities.
  • Data publishers: can aggregate data for training, using or extending the metadata descriptors.
  • AI system providers: can publish AI systems with strong version control, extended metadata descriptors, and quality assurance measures.

But the main users are AI system end users. With TAIBOM, the AI system user possesses the technical tools to carry out detailed investigations and queries on the system descriptions and their security qualities.
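
As a hedged sketch of what such an investigation could look like, the snippet below walks the declared dependencies of a deployed system and collects every licence and risk annotation the end user inherits. The data structures, identifiers, and field names are invented for illustration; they are not TAIBOM's actual query interface.

# Hypothetical end-user query: starting from the system being deployed, walk the
# declared dependency graph and gather all upstream licence and risk annotations.
# The attestation format here is a simplification invented for this sketch.
from collections import deque

attestations = {
    "system:chat-assistant:2.0": {"dependsOn": ["model:classifier:0.9", "code:inference-server:3.2"],
                                  "annotations": {}},
    "model:classifier:0.9": {"dependsOn": ["dataset:web-crawl:2.1", "code:trainer:1.4"],
                             "annotations": {"licence": "CC-BY-4.0"}},
    "dataset:web-crawl:2.1": {"dependsOn": [],
                              "annotations": {"licence": "unclear", "risk": "possible copyrighted content"}},
    "code:trainer:1.4": {"dependsOn": [], "annotations": {"licence": "Apache-2.0"}},
    "code:inference-server:3.2": {"dependsOn": [],
                                  "annotations": {"licence": "MIT", "risk": "unpatched known vulnerability"}},
}

def upstream_annotations(root: str) -> dict:
    """Return the annotations of the root component and of everything it depends on."""
    seen, queue, found = set(), deque([root]), {}
    while queue:
        component = queue.popleft()
        if component in seen or component not in attestations:
            continue
        seen.add(component)
        found[component] = attestations[component]["annotations"]
        queue.extend(attestations[component]["dependsOn"])
    return found

for component, notes in upstream_annotations("system:chat-assistant:2.0").items():
    print(component, notes)

In practice each entry would be a signed attestation like the one sketched earlier, so the end user can also verify who made each claim before relying on it.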


Benefits

πŸ”’

Security

The trustworthiness (or trustability) of an AI system improves when it is built on strong, interoperable foundations.

πŸ”

Supply Chain Transparency

End users can see in detail where the system comes from, allowing them to make informed decisions on usage based on relative trust.

πŸ“œ

IP and Licensing

Transparency of, and respect for, licensing and IP terms increases across the value chain, protecting end users, data owners, and system developers alike.


TAIBOM Community

TAIBOM was seed funded by the UKRI Technology Missions Fund. It is now a standalone working group under TechWorks, the deep tech trade body.

Its founder members include NquiringMinds, Copper Horse, TechWorks, BSI, and BAE. It is now a fully open body accepting new members.

  • John Moor (IoT Security Foundation)
  • David Rogers (Copper Horse)
  • Jeremy Bennett (Embecosm)
  • Carsten Maple (University of Warwick)
  • Nick Allott (NquiringMinds)
  • Sarb Sembhi (Virtually Informed)
  • William Jones (Embecosm)
  • Andrew Martin (University of Oxford)
  • Mo Ahddoud
  • Dmytro Nekrasov (Individual Member)
  • Rajesh Nayak (Individual Member)
  • Antoinette Hodes (Check Point)
  • Gareth Richards (TechWorks)
  • Frederic Taylor (BAE)
  • Matilda Rhode (BSI Product Certification)
  • Yunus Jabalpurwala (Honda)
  • Siraj Shaikh
  • Xingyu Zhao (University of Warwick)
  • Lokesh Devaraj
  • Paul Wooderson
  • Muthu Ramachandran (Forti5)
  • Kerstin Eder (University of Bristol)
  • Paula Palade (Jaguar Land Rover)
  • Joy Tan (BSI Standards Ltd)
  • Ivan Serwano (BSI Standards Ltd)
  • Ashley Setter (NquiringMinds)
  • Ionut Bostan (NquiringMinds)
  • James Richardson (NquiringMinds)
  • John Manslow (NquiringMinds)
  • Ben Curtis (NquiringMinds)
  • Henry Pearson (NquiringMinds)
  • Tony McCaigue (NquiringMinds)
  • Hannah Dalton (NquiringMinds)
  • Vadim Safronov (University of Oxford)
  • Dominic Taylor (Consilium)
  • Laura May Bishop (BSI Standards Ltd)
  • Max Greenwood (Department of Science, Innovation and Technology)
  • Colin McKellar (Axiomise)

Get Involved

Email: example@example.com to get involved.