Why trusted data is mission-critical to building ethical AI

By Vini Cardoso – Field CTO at Cloudera ANZ

Without a doubt, Artificial Intelligence (AI), and Generative AI (GenAI) in particular, continues to be a game-changer. McKinsey’s report, Generative AI and the Future of Work in Australia, suggests that AI could boost labour productivity by 1.1% annually until 2030 and automate half of all current workplace activities within the same decade. The growth in AI adoption shows no sign of slowing, with Australia’s projected AI spending set to reach $6.4 billion by 2026 and 57% of APAC organisations already early-stage adopters.

However, amid these positive indications, government and business leaders are voicing concerns that underscore a deeper issue: trust – or rather, a lack of it. Low trust continues to hold back adoption. Earlier this year, Industry and Science Minister Ed Husic acknowledged that while AI was forecast to support economic growth, low trust was the ‘handbrake’ that needed addressing. It is encouraging to see bodies like the Artificial Intelligence Expert Group being established to work on issues such as transparency, testing and accountability – essentially building the much-needed ‘guardrails’ to mitigate risk and ensure safe AI systems.

As technology leaders, we must weigh the benefits of AI against the challenges it can raise, including bias, data privacy, governance, and the integration of unstructured data with large language models (LLMs). Getting this balance right is crucial to protecting ourselves, our customers and the wider community.

There is much to consider when embarking on your own AI journey. In some respects, this is where the conundrum begins – do you start with AI, or do you start with the data?

Building an ethical AI system requires trusted data

Ethical AI describes the deliberate consideration and inclusion of core principles – accountability, transparency, and governance – in AI platforms and processes. Building AI systems that people trust requires organisations to have trusted information sources. With accurate, consistent, clean, bias-free, and reliable data as the foundation, an ethically designed enterprise AI system can consistently produce fair and unbiased results. Teams can then readily identify issues, close gaps in logic, refine outputs, and assess whether their innovations comply with regulations.

Here are some tips for organisations looking to develop solid ethical AI systems:

  • Focus on intent: An AI system trained on data has no context outside that data. Unless we define one, there is no moral compass or frame of reference for what is fair. Designers, therefore, need to explicitly and carefully construct a representation of the intent motivating the system’s design.
  • Consider model design: Organisations should remember that model design, not just data, can be a source of bias. Models should be monitored regularly for drift – the gradual loss of accuracy as the data a model sees in production changes over time – which can lead to unfair predictions and discrimination (see the monitoring sketch after this list).
  • Ensure human oversight: While AI systems can reliably make good decisions when trained on high-quality data, they lack emotional intelligence and cannot handle exceptional circumstances. The most effective systems intelligently combine human judgment and AI.
  • Enforce security and compliance: Developing ethical AI systems centred on security and compliance will strengthen trust in the system and facilitate adoption across the enterprise – while ensuring adherence to local and regional regulations.
  • Harness modern data platforms: Data platforms that support modern data architectures can greatly boost an organisation’s ability to manage and analyse data across the entire data and AI model lifecycle. Such a platform must have built-in security and governance controls that let organisations maintain transparency and control over AI-driven decisions as they deploy data analytics and AI at scale.
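To make drift monitoring concrete, here is a minimal sketch in Python that compares a feature’s live distribution against its training baseline using the Population Stability Index (PSI). The data, bin count and thresholds below are illustrative assumptions, not any specific product’s tooling:

    # Minimal model-drift check via the Population Stability Index (PSI).
    # Data, bin count and thresholds are illustrative assumptions.
    import numpy as np

    def population_stability_index(expected, actual, bins=10):
        """Quantify how far a live distribution has shifted from its baseline."""
        edges = np.histogram_bin_edges(expected, bins=bins)
        expected_counts, _ = np.histogram(expected, bins=edges)
        actual_counts, _ = np.histogram(actual, bins=edges)
        eps = 1e-6  # avoids log(0) and division by zero in empty bins
        expected_pct = expected_counts / max(expected_counts.sum(), 1) + eps
        actual_pct = actual_counts / max(actual_counts.sum(), 1) + eps
        return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

    # Rule of thumb: PSI < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant.
    baseline = np.random.normal(0.0, 1.0, 10_000)  # training-time feature values
    live = np.random.normal(0.3, 1.1, 10_000)      # recent production values
    psi = population_stability_index(baseline, live)
    if psi > 0.25:
        print(f"PSI={psi:.3f}: significant drift - review or retrain the model")

A scheduled job running a check like this against each model input gives early warning long before drift shows up as unfair or inaccurate customer-facing decisions.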

LLMs: deciphering the new rules of the game

Large Language Models (LLMs) are one application of AI that continues to disrupt the digital landscape – and they illuminate the importance of using trusted data.

LLMs offer many advantages, including the ability to perform a wide range of tasks and to develop AI solutions faster and at lower cost – very important for those of us looking to achieve faster time to value. LLMs also enable users to handle large volumes of data and scale on demand, surfacing trends and patterns not available through traditional data analysis methods.

However, there are risks to be wary of. Hallucinations and algorithmic bias, for example, can prove problematic when working with LLMs; both demand vigilance because they can produce outcomes that cannot be trusted.
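As a simple illustration of anchoring outputs to trusted data, the sketch below flags answers whose content does not appear in the source passages supplied to the model. The overlap heuristic and threshold are assumptions chosen for brevity – production guardrails rely on far more robust verification:

    # Naive grounding check: what fraction of an answer's content words
    # appear in the trusted source passages? Heuristic and threshold are
    # illustrative assumptions only.
    import re

    def grounding_score(answer, sources):
        tokenize = lambda text: set(re.findall(r"[a-z']{4,}", text.lower()))
        answer_terms = tokenize(answer)
        source_terms = set().union(*(tokenize(s) for s in sources))
        return len(answer_terms & source_terms) / max(len(answer_terms), 1)

    sources = ["AI spending in Australia is projected to reach $6.4 billion by 2026."]
    answer = "Australian AI spending is projected to reach $6.4 billion by 2026."
    if grounding_score(answer, sources) < 0.6:  # assumed cut-off
        print("Answer may be hallucinated - route to human review")
    else:
        print("Answer appears grounded in trusted sources")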

A more structural solution is platform-building: starting from the foundation – your infrastructure – and working your way up. While pre-trained foundation models offer huge capabilities, serving and scaling them can present formidable challenges given the diversity and volume of enterprise data. Focusing on platform-building rather than model creation also highlights how LLMs can, and increasingly need to, be integrated into enterprise processes to deliver maximum benefit.
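As a rough sketch of this platform-first idea, a pre-trained model can be wrapped behind an internal service so that governance, logging and access controls live in the platform layer rather than in each application. The framework, model choice and endpoint here are illustrative assumptions, not a specific reference architecture:

    # Minimal 'platform layer' around a pre-trained model: one governed
    # entry point where audit logging, PII redaction and policy checks can
    # be enforced. Model and endpoint are illustrative assumptions.
    from fastapi import FastAPI
    from pydantic import BaseModel
    from transformers import pipeline

    app = FastAPI()
    generator = pipeline("text-generation", model="distilgpt2")  # small demo model

    class Prompt(BaseModel):
        text: str

    @app.post("/generate")
    def generate(prompt: Prompt) -> dict:
        # In production, enforce access policies and log the request here,
        # before and after the model call.
        output = generator(prompt.text, max_new_tokens=50)[0]["generated_text"]
        return {"completion": output}

    # Run locally with: uvicorn service:app --reload  (file saved as service.py)

Because every application reaches the model through a single governed endpoint, controls can be tightened or models swapped without touching downstream systems.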

Moving forward on your ethical AI journey

I often remind customers that establishing good data intent starts with putting yourself in your customer’s shoes. This means asking yourself:

“Would you be comfortable with your data being used in this way?”

If the answer is no, that’s a big red flag. If the answer is yes, then this is likely your starting point.

When technology leaders fully understand the intent behind data use and design AI frameworks that factor in ethical principles and practices, they will be one step ahead of the competition – and, most importantly, they will be doing the right thing.