The need to build trust in AI
By Justin Hurst, Chief Technology Officer APAC at Extreme Networks
AI’s seemingly limitless power and potential have driven its growing acceptance across Australian enterprises and government. The technology often inspires equal parts excitement and fear among adopters: exciting because the possibilities seem endless, and scary for the same reason. What will ultimately make or break AI, and encourage broader use cases, is the extent to which organisations can trust the technology and its outputs.
There are many ways to think about trust in AI.
Most often, people are concerned about AI’s security, its ability to preserve data privacy, whether its output is accurate, and whether it can safely operate within existing compliance requirements. At the same time, organisations need to ensure that when building out an AI strategy, they tackle specific use cases that can ultimately benefit their employees, partners, and customers.
These are all excellent starting points for establishing trust in AI. They are also the basis from which we’ve explored the challenge ourselves through Extreme Labs, a technology innovation space that brings together diverse expertise to create prototypes that push technological limits and solve the IT challenges networking and security teams face. That work has naturally led us to AI, and through it we strive to develop trusted solutions to the standard that both we and our customers expect.
Through that work, we’ve identified four trust-enhancing principles that are broadly applicable in any AI context and may assist organisations with their AI journeys.
The first principle is that trust in AI is about partnership.
For end-user organisations, leveraging existing relationships with trusted partners is a natural way to engage with emerging technologies such as AI.
When end-user organisations buy into a technology ecosystem, they’re buying access to tooling, first and foremost, but they’re also buying into a team that can help them navigate challenges and seize opportunities. Forward-thinking developers of these ecosystems take seriously their role of innovating on end users’ behalf, constantly introducing new technologies, capabilities, and features in practical, thoughtful, and secure ways so they can be safely applied to the end user’s challenges.
People are more likely to trust AI that adds a practical dimension to an existing or familiar toolset.
While it can be interesting to see demonstrations of complicated what-if scenarios around AI, it’s often difficult to envision a practical role for these models in organisations. Even where there is a potential role, there may be questions about whether the AI can be trusted to perform in complex or business-critical circumstances. For this reason, through Extreme Labs we’ve intentionally focused on AI use cases that almost every network team could put to work every single day.
The second principle of trust is integration.
It’s all very well for an AI model to produce recommendations, but the more tightly the AI is integrated with the adopting organisation, the better it can perform in context.
An AI model produces its most accurate and business-relevant results when it’s able to engage with an organisation’s own internal operational data. Most organisations have put in place rules that prevent them from feeding such data into publicly accessible AI models because they don’t know how the models work and what, if any, safeguards exist. Not knowing how or if their data can be re-used damages trust.
Private AI models, on the other hand, enable organisations to work with their own data. For example, in a networking context, there may be a need to understand all of the network anomalies occurring in a geographic area. A private AI instance can draw on network infrastructure telemetry and management data, collate it, identify anomalous events based on defined parameters or dynamic thresholds, and determine how best to present the results. Because the model’s output is based on the organisation’s own data, trust in the results, and the value that can be derived from the AI, increases dramatically.
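To make that concrete, here is a minimal sketch of dynamic-threshold anomaly detection over telemetry: each new reading is compared to a rolling baseline and flagged if it deviates by more than a few standard deviations. The metric, window size, and three-sigma threshold are illustrative assumptions for this article, not a description of any particular product’s internals.

```python
# A minimal sketch of dynamic-threshold anomaly detection over network
# telemetry. The metric, window size, and 3-sigma threshold are
# illustrative assumptions, not any vendor's implementation.
from statistics import mean, stdev

def find_anomalies(samples: list[float], window: int = 20, sigmas: float = 3.0):
    """Flag readings that deviate from a rolling baseline by more than `sigmas` standard deviations."""
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]          # trailing window of recent readings
        mu, sd = mean(baseline), stdev(baseline)  # dynamic threshold recomputed each step
        if sd > 0 and abs(samples[i] - mu) > sigmas * sd:
            anomalies.append((i, samples[i]))     # (index, value) of the anomalous reading
    return anomalies

# Example: per-minute link latency in milliseconds; the final spike is flagged.
latency = [12.0 + 0.5 * (i % 5) for i in range(60)] + [48.0]
print(find_anomalies(latency))  # -> [(60, 48.0)]
```

Because logic like this runs inside the private instance, the telemetry never leaves the organisation’s control, which is precisely what underpins trust in the results.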
The third principle of trust comes down to the AI’s ability to consistently find good answers.
Today, teams and individuals spend a lot of time trying to find answers. No single person can answer every question internally. Discussions with multiple teams are often required, with a consensus view formed from the different responses before an accurate answer emerges.
An attractive and practical use case for AI is the ability to search across all of these internal data and knowledge silos to produce an answer in far less time, together with pointers to the source material to improve trust.
This can be seen in practice with Extreme AI Expert, which is currently in the technology preview phase and is being developed to look across our own vast knowledge base of hundreds of thousands of technical documents to quickly provide answers and actionable insights. Because the response to a question is verified against expert data generated by Extreme, the accuracy of the response is high. To further build trust, the ‘expert’ also exposes the information it used to generate the response, meaning extra validation is possible. Teams aren’t simply asked to accept the AI’s output; they can unpack how it reached the conclusion it did with some additional reading, if they so choose.
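The pattern at work here is retrieval with source attribution: rank internal documents by relevance to the question, ground the answer in the top matches, and return the document identifiers alongside it so the reader can verify. The sketch below illustrates that flow; the toy keyword-overlap scorer and in-memory document store are simplifying assumptions rather than Extreme AI Expert’s internals, and a production system would typically use vector embeddings for ranking.

```python
# A minimal sketch of retrieval with source attribution. The keyword-overlap
# scorer and in-memory document store are simplifying assumptions, not
# Extreme AI Expert's implementation.
import re

# Toy internal knowledge base; a real system would index thousands of documents.
DOCS = {
    "kb-101": "To reset an access point, hold the reset button for ten seconds.",
    "kb-202": "VLAN mismatches commonly cause intermittent client connectivity.",
    "kb-303": "Firmware upgrades should be staged on a test switch first.",
}

def tokens(text: str) -> set[str]:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question: str, top_k: int = 2) -> list[tuple[str, str]]:
    """Rank documents by word overlap with the question; return (id, text) pairs."""
    q = tokens(question)
    ranked = sorted(DOCS.items(), key=lambda kv: len(q & tokens(kv[1])), reverse=True)
    return ranked[:top_k]

def answer_with_sources(question: str) -> dict:
    """Bundle the retrieved passages with their IDs so the answer can be checked."""
    hits = retrieve(question)
    return {
        "context": " ".join(text for _, text in hits),  # grounding passed to the model
        "sources": [doc_id for doc_id, _ in hits],      # exposed for extra validation
    }

print(answer_with_sources("Why do clients keep losing connectivity?"))
```

Exposing the sources alongside the generated answer is what turns a black-box response into one a team can independently verify.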
The fourth principle is taking an enterprise-wide view of your functions.
Take a deep dive into your organisation and review the simple, everyday, repetitive activities that AI could accelerate or replace, saving the business time and money and freeing people up for more value-adding work.
At the same time, this may well open up opportunities to experiment with new ways of working and surface new use cases enabled by AI.
The more experimentation you undertake, the greater the chance of success. AI’s capabilities will only continue to grow, ultimately building greater trust and generating more value for your team and the business overall.
As AI evolves and matures, trust will likely become more of a given. In the interim, by working with trusted partners, building functionality incrementally, and basing your initial use cases on existing problem sets, you can implement useful, value-adding AI in your organisation while also developing organisational trust in its capabilities and limits. This paves the way for future growth and experimentation to unlock the full power of AI for your business.