AI Code of Responsibility

Revision: 1.0, September 2023

Owner: Alastair Whiteley, Chief Data Officer

 

Part 1: Statement of Intent

This is the Code of AI Responsibility of Agency Delta Ltd, trading as Deltabase.  The company utilises artificial intelligence (AI) in its business activities and seeks to do so in a responsible manner that is consistent with its core values.

The purpose of our Code of AI Responsibility is to:

  • Ensure that our use of AI is consistent with our company values and those of our customers.
  • Ensure that our use of AI enhances outcomes for our customers.
  • Mitigate the risks associated with the use of AI.
  • Ensure employees understand their responsibilities and the requirements they must adhere to in respect of the responsible use of AI.

Thanks for reading this and helping us to use AI as a force for good.

Alastair Whiteley

Chief Data Officer

 

Part 2:  Ownership & Responsibilities

  1. Overall accountability for the responsible use of AI:  Alastair Whiteley (Chief Data Officer)
  2. All employees shall:
    • Take ownership for ensuring that they adhere to these guidelines.
    • Report any concerns to an appropriate person (such as the Chief Data Officer named above).
    • Co-operate with supervisors and managers on matters related to these guidelines.
  3. Breaches of these guidelines:
    • Any breach of these guidelines should be raised immediately so that it may be discussed openly and resolved appropriately.

 

Part 3:  Code of AI Responsibility

At Deltabase, our Code of AI Responsibility is our commitment to use AI responsibly. It mitigates against misuse, unwanted bias and discrimination, and helps us provide our customers with accurate and trusted intelligence and benchmarking.

Our Code covers three primary areas of AI responsibility:

  • Practical application
  • Alignment and oversight
  • AI education and collaboration

 

  1. Practical application

 

  • Transparency
    We design our AI and wider systems to be transparent and explainable, creating and adapting them based on clear specifications, feedback, and ongoing learning.
  • Data Integrity
    We ensure the integrity and provenance of the data used as inputs to our AI systems, and our outputs are scientifically tested and assured to maintain their accuracy.
  • Privacy
    We respect the privacy rights of individuals and companies and will ensure that our AI systems in use are lawful, compliant and secure.

 

  2. Alignment and oversight

 

  • Value alignment
    We assess each AI use case to ensure alignment with our own values and those of our customers.
  • Supervision
    We continuously monitor and supervise our AI systems, maintaining expert human oversight to mitigate risks and unintended consequences.
  • Oversight
    We establish robust governance and oversight, clear roles, responsibilities, and accountability to support responsible AI practices.

 

  3. AI education and collaboration

 

  • Education
    We provide education and promote AI understanding across our team, our customers and the wider AI community, empowering them to make informed decisions.
  • Collaboration
    We embrace collaboration with the AI community, academia, industry, and regulatory bodies, sharing best practices, addressing challenges, and working together.
  • Innovation
    We foster innovation and advancement by keeping abreast of the latest ethical research, promoting a culture of transparency, collaboration, and innovation.