Dr Sue Keay: What if a little failure is the key to AI success?
UNSW AI Institute's Dr Sue Keay says leaders must embrace artificial intelligence experimentation to gain a competitive edge and drive organisational progress
For business leaders navigating the rapid rise of artificial intelligence, the instinct to wait until everything feels certain can be paralysing. Many organisations are holding off on AI adoption, hoping for a moment when it will feel safe, predictable or perfectly regulated. But that moment never comes.
It’s a mindset that has put Australia on the back foot. According to KPMG’s Trust in AI report, Australians lead the world in apprehension about AI. The irony? Inaction is its own risk – especially when competitors are building capabilities in the background.
At AGSM’s 2025 Professional Forum, keynote speaker Dr Sue Keay, Director of the UNSW AI Institute, challenged leaders to rethink their relationship with risk. “AI success,” she said, “starts with experiments – and a tolerance for failure.”
The case for early experimentation
According to Dr Keay, the future won’t be claimed by the cautious. It will belong to those willing to try, test and learn their way forward – even if that means getting it wrong the first time. “If you remain in a state of fear, you’ll focus overly on risks and regulations and you won’t be considering the opportunities these technologies can bring to your business,” she warned.
Dr Keay encouraged leaders to focus on building momentum with practical steps. Organisations that start small build what she calls their ‘AI muscle’, gaining the experience to scale more confidently as the technology evolves. “If you’re not prepared to experiment and find safe ways to explore AI,” she said, “you’re not going to build that muscle. And it will be that much harder when these technologies hit you at scale.”
And the benefits of early adoption, she noted, taper over time. As competitors experiment and refine their own approaches, the advantage of moving first begins to fade. For organisations serious about staying competitive, the time to make the leap is now.
Small pilots, lasting change
Robotics offers some of the most vivid examples of small AI experiments growing into industry-wide transformation.
Dr Keay described how gas companies, unable to fly inspectors to remote facilities during COVID, deployed robot dogs to capture imagery and assess safety. “We actually have robot dogs being used in gas facilities,” she said. “What began as a crisis workaround quickly became standard operating procedure.”
She also shared the example of COTSbot, an autonomous robot developed to help protect the Great Barrier Reef. This underwater vehicle identifies and removes coral-eating crown-of-thorns starfish, a major threat to reef ecosystems.
These stories reinforce a simple truth: starting small doesn’t mean staying small – and the effort you invest early can unlock benefits far beyond what you imagined.
Leadership starts with mindset
So, how can leaders turn small experiments into lasting capability? It starts with mindset.
While AI may be powered by code, leading its adoption isn’t about being the most technical person in the room. It’s about setting the tone for curiosity, responsibility and a willingness to adapt. “You will have to make decisions about these technologies, whether you have a computer science degree or not,” Dr Keay said.
Leaders need to be brave enough to ask questions, experiment openly and guide their teams through uncertainty. Because if you’re hesitant and risk-averse, your people will be too. And like it or not, AI is already seeping into workplaces. Employees are bringing in tools on their own, using generative AI to draft documents or solve problems in ways you can’t always see or control.
Without clear policies, education and oversight, those same tools can create unintended risks – from sharing sensitive data with public models to relying on flawed outputs. “If you haven’t invested time in teaching your teams what responsible AI use looks like,” Dr Keay said, “you can’t expect them to know where the line is.”
Just as cybersecurity has become a shared responsibility, AI literacy now needs to be part of every organisation’s culture. Training people to treat confidential data carefully, understand the limits of AI tools and report issues early can make all the difference.
Get the basics right
With the right mindset in place, Dr Keay was candid about where the real work begins: the unglamorous foundations.
To adopt AI responsibly, leaders must first understand the nuts and bolts of their own business. That means mapping out workflows in detail, cleaning up messy data sets and putting clear governance frameworks in place. “Successful AI doesn’t start with flashy pilots,” she said. “It starts with understanding exactly how your processes work today. And where smarter tools can genuinely add value.”
Done properly, this groundwork doesn’t just enable better AI adoption. It strengthens operations overall, exposing inefficiencies and clarifying where teams can improve.
Share your failures
Finally, Dr Keay urged leaders to be transparent about what doesn’t work. Too often, she said, organisations hide AI missteps out of embarrassment or fear. But sharing failures can actually build trust with employees, customers and stakeholders.
“We should seriously consider how we’re sharing information about when AI doesn’t work the way we expect,” she said.
The same way mandatory reporting has improved cybersecurity, normalising honest conversations about AI’s limits can help teams learn faster – and make smarter decisions the next time.
The courage to lead
In the end, Dr Keay’s message was simple. The organisations that will thrive are those willing to get started, even if the first steps feel imperfect.
Every experiment builds capability, and every lesson – even the hard ones – brings progress. And every leader who models curiosity over fear sends a signal: we’re ready to shape the future, not wait for it.