Global report reveals ‘Composite AI’ will be central to the successful adoption of Artificial Intelligence in the Australian enterprise
Dynatrace, the leader in unified observability and security, today announced the findings of an independent global survey of 1,300 CTOs, CIOs, and other technology leaders at large organisations, including those based in Australia. The research reveals that Australian organisations are increasing their investment in AI across all areas of their business to improve productivity, automate tasks, reduce costs, and keep pace with the competition.
Alongside the clear advantages, however, there are challenges and risks that organisations need to manage, from ensuring the outputs of generative AI are trustworthy enough to support business-critical use cases, to maintaining compliance with internal policies and global regulations on data security and privacy.
These findings underscore the need for a composite approach to AI, in which organisations combine multiple types of AI (such as generative, predictive, and causal) with different data sources (such as observability, security, and business events). This approach enables more advanced reasoning and brings precision, context, and meaning to AI outputs.
Findings from the Australian research include:
- 69% of technology leaders say AI has become mandatory to keep up with the dynamic nature of cloud environments.
- 82% of technology leaders say AI will be critical to security threat detection, investigation, and response.
- 86% of technology leaders expect AI to extend access to data analytics to non-technical employees through natural language queries.
- 41% of organisations have already changed the job roles and skills they are recruiting for because of AI.
- 75% of technology leaders think AI will enable cloud cost efficiencies by supporting FinOps practices.
- 74% of technology leaders will increase investment in AI over the next 12 months to speed up development by automatically generating code.
“AI has become central to how organisations drive efficiency, improve productivity, and accelerate innovation,” said Bernd Greifeneder, Chief Technology Officer at Dynatrace. “The release of ChatGPT late last year triggered a significant generative AI hype cycle. Business, development, operations, and security leaders have set high expectations for generative AI to help them deliver new services with less effort and at record speeds. However, as organisations endeavour to realise the expected value, it becomes evident that generative AI requires domain-specific tuning and integration with other technologies, including other types of AI. In addition, organisations must use AI securely and responsibly and monitor it closely to manage cost and user experience. This will help them provide accurate results, reduce expenses, and prevent employees from exposing sensitive data or creating vulnerabilities in their environments.”
Additional Australian findings include:
- 86% of technology leaders are concerned that AI could be used for non-approved purposes as employees become more accustomed to tools such as ChatGPT.
- 97% of technology leaders are concerned that using generative AI to create code could result in the leakage and improper or illegal use of intellectual property.
- 99% of technology leaders are concerned that generative AI could be susceptible to unintentional bias, error, and misinformation.
- 96% of technology leaders say generative AI would be more beneficial if it were enriched and prompted by other types of AI that can provide precise facts about current states and accurate predictions about the future.
“One of the most significant challenges organisations face with generative AI is achieving meaningful responses that users can trust to solve specific use cases and problems,” continued Greifeneder. “Especially for use cases that involve automation and depend on data context, taking a composite approach to AI is critical. For instance, automating software services, resolving security vulnerabilities, predicting maintenance needs, and analysing business data all need a composite AI approach. This approach should deliver the precision of causal AI, which determines the underlying causes and effects of systems’ behaviours, and predictive AI, which forecasts future events based on historical data.
“Predictive AI and causal AI not only provide essential context for responses produced by generative AI but can also prompt generative AI to ensure precise, non-probabilistic answers are embedded into its response. If organisations get their strategy right, combining these different types of AI with high-quality observability, security, and business events data can significantly boost the productivity of their development, operations, and security teams and deliver lasting business value.”