Insights and seminars
From time to time, the Digital Transformation Agency publishes insights and seminars to help government agencies adopt AI technologies.
Implementing AI
With government's unique accountability, financial and cultural constraints in mind, the discussion will help APS leaders and practitioners find the shared vocabulary and opportunities to innovate with AI technologies effectively.
During a session titled Implementing AI, Kate Pounder (Board member, Amplify and former CEO, Technology Council of Australia), Doug Gray (Director of Data Science, Walmart Global Tech) and Dr Evan Shellshear (Adjunct Professor, QUT) discuss what makes innovative data science projects successful, with real-world examples from one of the world's leading tech innovation labs.
This discussion was recorded on 16 October 2024.
-
The video opens with a title slide. This slide shows the Australian Government Digital Transformation Agency logo as well as a logo that says 'artificial intelligence in government'. The main title reads 'Implementing AI' and the subtitle reads 'A conversation for leaders and practitioners'. This slide is present in the background throughout the presentation, located behind the three panellists. There is also information on the slide for accessing live, interactive content that no longer works.
00:00:00:00 - 00:00:39:06
Unknown
Good morning, everybody. We're delighted to have you today. Thank you for joining us. On behalf of the Digital Transformation Agency, we're glad to have so many of you joining us from all across the country and from so many different jurisdictions of government. But before we get started today, a word of thanks. None of this would have been possible without the help of the Queensland University of Technology,
00:00:39:08 - 00:01:02:15
Unknown
the organising team at the DTA, and the generous and ongoing support of the Department of Employment and Workplace Relations, whose studio we're broadcasting from today on Ngunnawal country. We're so excited for this conversation today and we want to get straight into it. So I'll soon hand over to our wonderful moderator, Kate Pounder, the former CEO of the Tech Council of Australia and a friend of the DTA.
00:01:03:00 - 00:01:16:05
Unknown
But before we kick off, I'd like to invite Ngunnawal Elder Aunty Serena Williams to deliver the Welcome to Country. Thanks.
00:01:16:06 - 00:01:47:06
Aunty Serena Williams
[Aunty Serena Williams speaks] Hi. Good morning, everyone. I'd also like to acknowledge the Digital Transformation Agency for respecting Ngunnawal protocols and engaging with our protocols in respecting those. Before I start, I'd like to acknowledge the moderator, Kate Pounder, and the panellists Dr Evan Shellshear and our dear friend from Texas, Doug Gray. Implementing AI – a conversation for leaders and practitioners.
00:01:47:07 - 00:02:26:15
Serena
Government must be an exemplar when it uses AI. These conversations are vital to having decision makers, policy leads and digital practitioners make grounded, responsible choices. In particular, with data-powered AI, the data government holds, and how it uses that data, can change lives for better or worse. This is particularly true for Aboriginal and Torres Strait Islander people, where there are new frameworks for governance of Indigenous AI, and ultimately it relies on people like you making thoughtful, informed choices at the start.
00:02:27:00 - 00:02:51:10
Serena
And thank you for doing so. I'm here to do a Welcome to Country, and I'd like to acknowledge everyone that is online with us today too. And, a Welcome to Country is acknowledging one's business when on country. And I'd like to acknowledge all my elders past, present and future and any other Aboriginal and Torres Strait Islander people online.
00:02:51:11 - 00:03:15:07
Serena
And I say in my revitalised language, I say yamalundi yanamarra ngunnawal dhawura. Hello, come. I will sweep the lands for you to leave your footprints here on beautiful Ngunnawal country. Ngadjan, the water of the Orroral, the Murrumbidgee, the Molonglo and the Gudgenby that will cleanse you of all harm. And mulleun – mulleun is the wedge-tailed eagle, the totem of the Ngunnawal people,
00:03:15:07 - 00:03:34:08
Serena
and she will guide, protect and oversee you here on your journey. On behalf of myself, other Ngunnawal elders and Ngunnawal family groups, I welcome you here to the beautiful land of the Ngunnawal. I hope you have some great conversation today. I know technology has come a long way. We are the oldest scientists in the world.
00:03:34:09 - 00:04:00:09
Unknown
But, artificial intelligence ... it's a bit scary, but I think it needs to be, you know, in the way, moving forward. Congratulations, and I'm sure everyone will be looking forward to these discussions. Thank you. [Kate speaks] Thank you, Serena, for that beautiful and gracious Welcome to Country. And I'm so glad to join this conversation today with two fantastic guests.
00:04:00:10 - 00:04:20:15
Kate Pounder
Our first is Doug Gray, who, as Serena said, is all the way over in Australia from Texas. And Doug is the director of data science at Walmart Global Tech, which is the tech and research arm for Walmart. And we also have a local data science expert, Dr Evan Shellshear, who is an adjunct professor at the Queensland University of Technology.
00:04:21:00 - 00:04:52:07
Kate
Now, both are eminent leaders in data science, analytics and AI. Just last month they published their book 'Why Data Science Projects Fail: The Harsh Realities of Implementing AI and Analytics Without the Hype'. Now, I'll return to them in a moment, but I wanted to call out right at the start of this webinar that this is a fantastic opportunity for everyone listening and watching to post your own questions and have your chance to pick the brains of the fantastic experts that the DTA have assembled today.
00:04:52:08 - 00:05:19:14
Kate
So if you do want to get those questions in, and we'll have around 30 minutes in the back half of the webinar for them, please follow the instructions on screen or via the link in the email that you would have received this morning. In their book, Doug and Evan explore why most analytics, data science and AI projects fail, and they forensically examine real-world case studies to find that there are a couple of lessons that are pretty common across these failures.
00:05:19:15 - 00:05:40:09
Kate
The first is that many projects are vastly more complex than people assume, and the second is that the analytical maturity of an organisation is the single biggest indicator that predicts whether these projects will be successful. Now, that's probably spread fear in the hearts of everyone listening, because I'm sure, you know, everyone would start thinking, well, what is our maturity?
00:05:40:09 - 00:06:14:15
Kate
And, you know, they have a sinking feeling that maybe this is harder than they thought. But the good news is Evan and Doug are not here to scare you. They're here to help you. And that's important because we know that the opportunity to implement AI in the public sector is a huge one, but also one that comes with an incredibly important responsibility to do it well and get it right, both so that the outcomes of those projects get realised and it can help with better public-sector decision-making, but also so that those projects are implemented in a safe, effective and ethical way.
00:06:15:00 - 00:06:42:04
Kate
So our hope is that, with the government's unique context in mind, today's discussion will help APS leaders and practitioners find the shared vocabulary and the opportunities to innovate with AI technologies in a robust and responsible way. So enough from me. I'm going to throw briefly to both Evan and Doug now and just ask them to give a little snapshot of their career and what brought them to AI and how that has informed the way they think about this work.
00:06:42:05 - 00:07:08:10
Kate, Doug
Excellent. I might start with Doug. [Doug speaks] Thank you so much, Kate. Again, my name is Doug Gray. I have been in technology, data science and analytics for about 30 years. After degrees in mathematics, I started my career with American Airlines, one of the world's greatest analytics and ops research organisations. I was very fortunate and privileged to join there and learn a lot of great lessons, through success and failure, about how to economically impact the airline industry.
00:07:08:12 - 00:07:35:11
Doug
I also spent half my career in e-commerce, as CTO of Travelocity, a US travel booking website. I spent 5 years with Southwest Airlines as a director of enterprise data and analytics. Did a lot of good work there in the areas of crew scheduling and fuel inventory management, with very significant economic impacts. And then the last 5 years I've spent with Walmart, where my work is centred in fulfilment and supply chain.
00:07:35:12 - 00:07:59:01
Doug
And my teams over the last five years have added over $1 billion in economic value, taking cost out of the supply chain and making it more efficient, which is critical. And lastly, I have spent the last 8 years teaching as an adjunct professor at Southern Methodist University in Dallas, leading students in executive MBA programs, where they do real-world projects as well and deliver value back to their organisations.
00:07:59:01 - 00:08:21:14
Doug, Kate, Evan
So very, very happy to be here and share the message from the book. [Kate speaks] That's fabulous, thank you. And Evan. [Evan speaks] On my side, I'm, as you mentioned, a local – grew up in Brisbane and did a lot of mathematics as part of my background and study. I left Australia just before my 22nd birthday to go to Germany and complete a master's and PhD in applied mathematics, specifically game theory.
00:08:21:15 - 00:09:01:10
Evan
After finishing that up, I met my beautiful wife there and we decided to travel up to Sweden to start a family. I worked there for six years with the Fraunhofer-Chalmers Institute, building algorithms that effectively run the robots on the production lines of pretty much every automotive manufacturer in the Western Hemisphere. Then in 2015 I came back to Melbourne, Australia, and spent about four years working with a lot of startups and sort of cutting-edge technologies, eventually moving back to Brisbane for a head of analytics role with a company called Biarri that builds artificial intelligence solutions, in particular in the space of optimisation and operations research and decision-making, and then finally took
00:09:01:10 - 00:09:26:05
Evan, Kate
on a managing director role recently with the company Ubidy, a global platform that helps organisations manage recruitment. And I kind of get to have a lot of fun there, leading a lot of the AI efforts in that space. [Kate speaks] That's fabulous. And as you can see, we're really fortunate to have two leaders who not only have worked in industry to implement these projects, but have also worked within the academic sector and undertaken research and training.
00:09:26:05 - 00:09:52:07
Kate
And I'm sure we're going to benefit from the breadth of that experience. And before we dive into the question of why do so many AI projects fail, I'm going to keep the audience on tenterhooks there for just a little bit longer. Evan, can I start with you? I think it would really help the audience if we could start with a shared understanding of what we mean when we talk about AI and what kind of AI projects in particular do you mean
00:09:52:08 - 00:10:13:05
Kate, Evan
and what practically have you seen people implement? [Evan] Yeah, really good place to start. When we think about AI, I was part of a podcast a little while ago, and I was probably a little bit relaxed with my response to that question, where I basically kind of said, look, I think the way the public's taking this is, it's any algorithm that takes some sort of inputs and has some sort of outputs and does something intelligent along the way.
00:10:13:05 - 00:10:37:02
Evan
And I was just like, look, I'm happy to roll with that. That's the way society is going. I'm not going to fight it. But if we take a step back and in particular, you've mentioned both Doug and my academic backgrounds, and if we look at it from kind of a purist perspective, what AI really is – it's the development of theory and computer systems that try and do things that humans do, and in particular in a way that they learn from data and then are able to produce outputs based on what they've learned.
00:10:37:02 - 00:11:15:02
Evan
And typical applications include computer vision, natural language processing, translation, speech and audio – all these kinds of things where we're typically looking to generate something based on a set of inputs. If we look at the different applications of AI nowadays, and in particular what gets rolled up into that, from a purist perspective a lot of the work I've done in optimisation isn't really AI – it kind of sits outside it – but nowadays I think, from the public view, it's in there, right in the middle of it. What we're seeing is really a broad application across both industry and society.
00:11:15:02 - 00:11:34:01
Evan, Doug
And I think society is really drawn in by things like generative AI, ChatGPT and things like that. But as far as organisations are concerned – and Doug's got a wealth of experience in this space, especially in operations research – we've been doing this since like the '80s. Right, Doug? [Doug speaks] So yes, absolutely. So complex decision-making is an area that I have focused in.
00:11:34:03 - 00:12:14:06
Doug
And I'll talk more later about what Walmart does. We use robotics. We use computer vision to inspect products as they're flowing through the DC (distribution centre) to look for defects, and also to inspect our vehicles to see if they're damaged or their seal is broken. You can use computers to actually visually inspect packages and vehicles. But there's also complex decision-making through a method we call interactive optimisation, where there is a machine-learning model that is also getting insight and input from the human, because humans are very good at exception handling and the nuanced decision-making, where the computers are very good at handling large volumes of information, literally millions and millions of variables and maybe
00:12:14:08 - 00:12:39:00
Doug
billions of data points. So, that application is known as narrow AI. We're almost exclusively focused on narrow AI as opposed to general AI, where the computer system would exceed the human's capabilities. That's the other area. It's not as practical, really, for business, but there are myriad practical implementations of narrow AI. [Kate] I think that's an incredibly helpful overview.
00:12:39:00 - 00:12:56:07
Kate
And I think it just also highlights why it is so important to the public sector because, obviously, complex decision-making is essentially the core task of anyone in the public sector, and it involves decisions that cover the breadth of the country, affect a great number of people and often draw on many different data points. But, as you said, there's a real nuance to that decision-making.
00:12:56:07 - 00:13:13:15
Kate
And that, you know, absolutely does require humans in the loop as you do it. Doug, can I turn to you now? You're clearly a man without a lot of time on your hands. You have a big day job at Walmart, you know. You're teaching in Texas. What motivated you to also write a book? [Doug] Right.
00:13:14:01 - 00:13:37:03
Doug
I appreciate that. I started seeing a lot of articles about 5 years ago about failure rates. Deloitte published a study with Tom Davenport, who's one of the gods of analytics, finding that 80% of all data science projects fail. The numbers range across mature and immature companies from 40% all the way up into the 90s. That, fortunately, had not been my experience.
00:13:37:05 - 00:14:06:09
Doug
I have my share of failures, but it's more like the opposite – 20 to 30%. And I really wanted to understand why projects were failing and what we could do to help people be more successful. That's really the motivation for the book, and I started this literally with a top-10 list, the top 10 reasons that I could identify, and then shared that within Walmart, and shared that within the analytics communities publicly in the US. Got tremendous feedback, and people said, you really should write this down.
00:14:06:10 - 00:14:34:07
Doug
So I wrote a series of articles on this topic. Evan saw those and Evan said, you know, I think we could really do a lot of good to help people avoid failure. We have a phrase in the book: 'failure is feedback'. And there's oftentimes a connotation that failure is bad. In our culture – in Walmart and in the US, and especially in Silicon Valley – we fail fast and learn, and we want to learn from those failures.
00:14:34:08 - 00:14:59:01
Doug
And that feedback is really what we're trying to share, in the context of real-world stories, backed up by more data and analysis to validate those top-ten reasons – and we actually came up with two more, leadership and empathy. You wouldn't think that empathy would be a topic inside a science, but human interaction and human capabilities and organisational behaviour really are at the core of why a lot of projects fail.
00:14:59:02 - 00:15:33:09
Unknown
We'll talk more about that. [Kate] I think that's an interesting insight, and obviously empathy is so important to design and design thinking, which is, you know, often a really critical part of any analytical project or service delivery. So, I think it's wonderful that you're pulling it out. Evan, I'm going to turn to you now. As you've discussed, and as you kind of teased to the audience, many AI projects do fail, but you've identified those key reasons, those ten plus two reasons why. Can you share with the audience what they are? [Evan] So when we did the initial research behind this – and there was a lot of work both Doug and I undertook to
00:15:33:12 - 00:15:59:11
Evan
understand this – one of the things that we definitely didn't want, and it was something that was important for us, was to just come up with our own opinions. And so between the two of us we invested almost a year going through and reviewing hundreds of blog posts. We did a meta-analysis of thousands of peer-reviewed articles, looked at podcasts, videos and everything like that, and we just collated this massive sheet –
00:15:59:12 - 00:16:22:08
Evan
all the reasons why data science projects fail. And in that sheet I would suspect we have more than 100. But what we realised is that just listing out all those 100 isn't going to be helpful for the reader. And so, as Doug mentioned, we kind of did an 80/20 thing and cut it off at the major reasons: how can we help the reader focus on those kinds of things to drive forward and improve their chances of success?
00:16:22:08 - 00:16:45:06
Evan
As Doug mentioned, when you look at organisations, and this will help unpack the reasons, you've kind of got two ends of the spectrum, so to speak. You've got the analytically immature organisations and the analytically mature organisations – we just leverage language used by other well-known researchers in this space. On the analytically immature end you've got something like a 90+% failure rate; at the analytically mature end, around 40%.
00:16:45:08 - 00:17:03:03
Evan
One of the things that we added in the book was actually splitting that up. Most people just quoted a single figure, about 60% or 80%. We just said, we know that's not the case; it depends on the organisation. And then we unpack exactly, to your point, the reasons you can work on to chip your way towards that 40% failure rate and improve over time.
00:17:03:04 - 00:17:26:00
Evan
And it begins with things like strategy, people, process and technology. And I guess as technologists we were kind of coming in with the mindset: okay, what are all the technology reasons? What are the new techniques you can use to improve this? But, as Doug mentioned, the big thing that we discovered was that most of this isn't technology related. Most of this is to do with other things, like not having a good business use case.
00:17:26:02 - 00:17:44:11
Evan
And given that we're talking to the public sector and to the government today, I think one of the challenges that organisations and the public sector face is resisting the temptation to react to the hype, because we saw that as one of the major challenges: not having a business case, but looking at some technology because they really want to use
00:17:44:11 - 00:18:06:06
Evan, Doug
it – that was just one of the major ... in fact, I think it was the number one cause of failure, right, Doug? [Doug] That's right. So understanding the business problem at hand and then tagging that together with a business target. In the private sector, we have no lack of key performance indicators – profit and things. The public sector has its own metrics, whatever those might be.
00:18:06:08 - 00:18:31:02
Doug
But understanding the business problem, getting the data that you need, tying it to a target, tying it to a project or program that people care about. [Kate] Yeah. [Doug] That's really going to be beneficial. Sometimes, as technologists, myself included, we get a little bit wrapped around the axle. I've seen companies glom onto things like computer vision, and their first question will be: we need to go do a project with computer vision, regardless of the context.
00:18:31:03 - 00:18:55:02
Doug
Absolutely the wrong approach. [Kate] Yeah. [Doug] Because then you really are a solution in search of a problem. You really want to start with the problem. Start with the value target that you're seeking to improve upon, and then figure out the right technologies, of which there are myriad. You know, we have no lack of computing power, no lack of algorithms, but really focus in on things that the people, the government, the constituents care about.
00:18:55:03 - 00:19:28:04
Doug
And then lastly, one of the biggest reasons – it's my number ten on the list – is moving from a model in a sandbox in the cloud to production. That can be an order of magnitude or two. As you mentioned, the word complexity – complexity, cost, resource consumption and time. There was a study done at MIT that we quote in the book: it can literally take two orders of magnitude to get from a model that you wrote in an afternoon in your Jupyter Notebook to a full-fledged production-scale system.
00:19:28:05 - 00:19:48:03
Doug
And data science projects become IT projects. Yes. We've been through that process many times, and that's where a lot of them end up in the ditch, so to speak – that's what we're trying to avoid by being conscious of that complexity, cost and resourcing. [Kate] We've probably managed to deeply scare everyone listening, who can probably relate to some of those things you discussed.
00:19:48:07 - 00:20:11:07
Kate, Doug
Is there some advice that we could give to the listeners about how to avoid falling into those traps, beyond defining that business problem, being clear on the outcomes and the metrics, and then working out a path to doing it? Are there any other lessons? [Doug] That's a great question, and I believe that doing data science and AI is similar to playing an instrument or learning a new sport.
00:20:11:08 - 00:20:35:11
Doug
You have to practise. It's not something that you just read about. A Coursera class or a Udemy class will help you learn the technology, but you have to do it. I'll go back to what I said earlier about failing fast, learning and experimenting – actually doing the work and the practice of data science, applying it and figuring out what went right and what went wrong.
00:20:35:13 - 00:21:15:12
Doug
And that's the only way you really get good at it. I spent half my career in analytics, data science and AI and the other half in software and e-commerce, and I was very fortunate to be with people that were better at it than me. So find some experts – if you don't have experts in your domain, somebody in academia or maybe in a consulting practice can help. And if you're looking for hands-on doers, it's actually great to go to universities: do capstone projects and internships with students that are craving experience, that know the hands-on tools and technologies but benefit from the business subject matter experts.
00:21:15:13 - 00:21:40:03
Unknown
So bring people together that are subject matter experts – non-STEM, we would call them – and then STEM resources. That's where that partnership really takes off. And those folks have to come together as a team and kind of learn together and work together. [Kate] And has that been your experience as well, Evan? [Evan] Yeah, absolutely. I think one of the big drivers of reductions in failures, I'll be honest with you, really starts from the top.
00:21:40:04 - 00:21:57:00
Evan
And that comes to Doug's point about failing forward. If the top doesn't allow the ability to develop the knowledge and understanding, then it's not going to rise. And a lot of this stuff, I find, because of the hype, warps the perspective. Let's take a step back, just momentarily, and look at an industry at the core of AI.
00:21:57:00 - 00:22:17:08
Evan
And that's the CPU, the chip that does a lot of the computations – or even, nowadays, the GPU more than the CPU. And if you look at that industry in the United States, where it really grew and created Silicon Valley, it was effectively built on the foundation of Fairchild Semiconductor, Intel, AMD, Nvidia and all these other organisations.
00:22:17:10 - 00:22:38:03
Evan
They didn't arise overnight. They started kind of in the '50s, '60s and '70s as the industry tried to move from vacuum tubes down to transistors, replace them and develop the chips that we now use for all our computing purposes. It took a long time, and it built up that kind of expertise around it, such that people looked at that area and went, wow, isn't it amazing what's going on?
00:22:38:04 - 00:22:55:08
Evan
But I don't see why other places can't develop that too. And, in particular in AI, if we look at that model and have the patience to say: look, this is a long-term game; we need to invest in this; we need to be looking at this in the long term. We don't want to react to the hype, start building tools, fail and then say it's all snake oil –
00:22:55:08 - 00:23:16:11
Evan
which is what I see happening. I see organisations react to the hype, and then they say this AI stuff's nonsense – it's not real, it's overblown. Where the unfortunate truth is actually more that they lack the maturity and capabilities to deliver on it. You need to set that long-term vision and stay the path. And even in the public service, there are already a lot of AI capabilities.
00:23:16:12 - 00:23:35:02
Evan
I know this for a fact. I've done projects with the government in Australia, and it's about leveraging that – like Doug said, bringing it in, building those communities of practice, telling those stories of success, and building from there, not getting caught up in the hype, continually failing and then going, this thing doesn't work.
00:23:35:04 - 00:23:58:02
Kate
[Kate] Yes. And can I perhaps draw on some in-field practical experience here, Doug? It's one thing to get that model right at a project level. But, you know, we were just talking, and I think Evan alluded to it really well: the real key is being able to have a vision as an organisation and to build that capability throughout the organisation and to build it over time
00:23:58:02 - 00:24:27:03
Kate
so you're not just applying it to a single project, but you're kind of constantly learning and building on it, improving. What have you found practically in trying to do that at Walmart, and what lessons do you think you have for the listeners today? [Doug] It's a great question, and organisational learning is really key. When I started my career at American Airlines, we had one small group of 40 people that serviced the entire airline, which grew to 500, and that was a centralised model.
00:24:27:04 - 00:24:51:09
Doug
And now, in the last two companies I've worked with, Southwest Airlines and now Walmart, what we see is that data science and AI is becoming diffused. I'm in Walmart Global Tech. We have literally hundreds of people within Walmart Global Tech that serve the business. But the business units also have their own people in their own domain areas, because merchandising is different than product planning is different than supply chain.
00:24:51:09 - 00:25:16:07
Doug
So you really need to be embedded and understand the domain that you're in. And then I would also say that we've got an organisation that's literally called the Data Science Center, excuse me, the Data Science Community of Practice Center. And it's data scientists across all of those different groups in the business, in technology, getting together monthly. We share successes.
00:25:16:08 - 00:26:01:01
Doug
We share failures. We share data sets that we've discovered or curated that might be useful. We share models with each other. One of the big principles at Walmart is reuse. So once you build something, let's not have three other groups reinvent the wheel. An example – my group has built a forecasting system. We forecast every container coming from overseas, we forecast cases per trailer, trailers, and the miles that those trailers are going to travel, to figure out the right number of drivers and tractor trailers that we need across our entire network of 4,500 stores and hundreds of distribution centres. That forecasting-as-a-service system has been created into a platform that's now being used by other
00:26:01:01 - 00:26:26:06
Doug
groups. So they literally just have to take their data, put it in the right format, drop it in, and the best model possible comes out. So even if they're not super sophisticated at forecasting, that reuse really helps to ensure an increased probability of success, get projects up and running faster and avoid reinventing the wheel. And that way people can just get on with the business of getting to the business value.
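To make that 'forecasting as a service' pattern concrete, here is a minimal sketch of the idea Doug describes: callers supply their history in a fixed format, the service tries a few candidate models and returns forecasts from whichever performed best on held-out data. The candidate models, function names and interface below are illustrative assumptions, not Walmart's actual platform.

import numpy as np

def naive(history, horizon):
    # Repeat the last observed value across the horizon.
    return np.full(horizon, history[-1], dtype=float)

def seasonal_naive(history, horizon, season=7):
    # Repeat the most recent full season of observations.
    reps = -(-horizon // season)  # ceiling division
    return np.tile(history[-season:], reps)[:horizon].astype(float)

def moving_average(history, horizon, window=28):
    # Forecast the mean of the trailing window.
    return np.full(horizon, history[-window:].mean())

CANDIDATES = [naive, seasonal_naive, moving_average]

def forecast_service(history, horizon):
    # Score each candidate on a held-out tail of the history, then
    # forecast the requested horizon with the best one. Assumes the
    # history is much longer than the horizon.
    history = np.asarray(history, dtype=float)
    train, holdout = history[:-horizon], history[-horizon:]
    def error(model):
        return float(np.abs(model(train, horizon) - holdout).mean())
    best = min(CANDIDATES, key=error)
    return best(history, horizon)

The point Doug makes is organisational rather than technical: because the interface is fixed ('put it in the right format, drop it in'), teams that are not forecasting specialists still get a vetted model, and nobody rebuilds the plumbing.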
00:26:26:07 - 00:26:50:07
Doug
Our CTO talks about fastest time to business value. People ask me all the time how long projects take: they could take a month, they could take six months, they could take a year. We'd like to contain them to a year, and that's actually pretty quick at Walmart scale. But that level of reuse and sharing of success and failure across the organisation is really key.
00:26:50:08 - 00:27:11:06
Kate
[Kate] I think they're great insights, and I'll note that we are getting a fabulous volume of questions coming in from the audience. So please do keep them coming, and I'm going to throw to those questions probably in the next 5 to 10 minutes. So if you are thinking about posting one, now is your time, and it will be first in, best dressed in terms of which questions I ask.
00:27:11:07 - 00:27:34:09
Kate
Evan, can I turn to you now? You know, you mentioned that you've had a career where you've worked right across the private sector, overseas and in Australia, but you've also had some experience working with public sector departments and seeing what they're implementing. Have you observed any unique characteristics about the public sector in terms of either the opportunities they face or some of the risks that they have to manage, and any lessons that you've learned from that?
00:27:34:13 - 00:27:52:03
Evan
[Evan] Yeah, absolutely. I think there's two main topics that I'd like to touch on here. And one of them was a project I did with the public sector many years ago. And we sat down with the team and we started looking at the data set and started exposing a number of insights there that could be very useful for their decision-making.
00:27:52:04 - 00:28:18:01
Evan
When we presented those insights to the team to take back to impact policy and other things, what we noticed was a real reticence to take up the insights that we'd pulled out of there and make decisions. And my feeling from that interaction was this concern about failure. Even though the insights were there – and these people were domain experts, like Doug was talking about before; you have to have the domain experts in the room.
00:28:18:02 - 00:28:36:00
Evan
And if they're not comfortable with the data-driven way of making decisions, then putting your faith in that data and those insights is very challenging. And I totally understand that. And so there was almost this reticence to use it just because of the potential for failure. Because in private industry, like Doug said, you can fail and learn from it,
00:28:36:00 - 00:28:53:11
Evan
and that's a positive thing, while in the public sector there's this inability to fail because of the impact it can have on your career – in particular when many projects are long, maybe four-year, projects. And the way a lot of funding works in government is that you don't get a lot of funding for small things to trial stuff.
00:28:53:11 - 00:29:12:03
Evan
You get funding for big stuff, not small stuff, because small stuff isn't always announceable. It's a challenge to announce: hey, I'm doing this little thing that's tidying this up and improving that, whereas you can announce the big things. And so that makes the environment a lot more challenging. And I'm not saying it's right or wrong; it's just a matter of fact of the way things operate.
00:29:12:03 - 00:29:37:11
Evan
And so, on a lot of these agile frameworks that Doug talked about, I worked with an organisation in Melbourne that did a complete agile transformation. What they did is exactly what Doug mentioned. They went from this siloed approach to an approach where people span boundaries. And you had these chapters where people would come together with some level of expertise – IT, finance, analytics – but then they'd be embedded in a particular line of business to produce the insights.
00:29:37:11 - 00:29:53:15
Evan
And so, in government, I definitely think that's a challenge. And I think that's something that needs to, again, come from the top. The top says: look, we do want to promote this. We're going to create these innovation centres of excellence, or whatever, where we can trial ideas. The announcement is that Innovation Centre of Excellence, which gets a big bucket of money.
00:29:54:00 - 00:30:17:06
Evan
The implementation is small projects across government where people can fund these proofs of concept (POCs). But that then comes to my second point, which is exactly what Doug mentioned at the start: scaling up to production. Typically at the POC level, when you're applying artificial intelligence, it's often possible to make that single POC a really well-defined, specific task. And that means it's amenable to what Doug mentioned –
00:30:17:06 - 00:30:36:15
Evan
narrow AI. Narrow AI is typically a good application of a specific algorithm, or set of algorithms, to one task that's well defined and well understood. Now, what happens when you go from that proof of concept to the real world? Exactly as Doug mentioned, you move into the space of complexity, right? You're moving now into integrating it into the organisation. Oh, wait a second.
00:30:37:01 - 00:30:56:12
Evan
That's going to impact this person's job. They're now ... their job and they might have their own ego at stake where they're going: well, if I do that, that's going to take away responsibility, which is going to reduce my chances for promotion. Then there's a huge change management piece in that kind of area. And I think, in big organisations, right, Doug, that must be something
00:30:56:12 - 00:31:25:07
Doug
that's pretty ... [Doug] It's a great ... it's a great example. It's a great lead-in. The first system I worked on at American Airlines involved aircraft maintenance planning and scheduling. The aircraft had to come in for a big maintenance project – it's $1 million to basically overhaul the aircraft. And when the fleet grew from 200 to 600, they used to do this on a big sheet of paper with coloured pencils, if you can believe that. They would literally schedule on sheets of paper.
00:31:25:10 - 00:31:46:02
Doug
Then they moved to Excel, and the Excel system crashed. They brought my team in, and we basically replicated the same algorithm and the same decision-making they were doing, but we got the computer to do it. And what used to take them weeks was coming up with a mediocre plan with a score of, say, 80 out of 100 – we call it the yield on the check.
00:31:46:04 - 00:32:13:06
Doug
We got the yields up to 99%, and we were able to reduce the time from weeks to minutes to generate maintenance plans – under 20 minutes for the entire fleet. And to Evan's point, that change management problem was significant. The people that had been doing this manually pulled me into a conference room and said, 'You're going to put us all out of work with that computer program of yours.'
00:32:13:07 - 00:32:32:12
Doug
And I said, you know what? In Texas, I'll bet you a steak dinner that I'm not going to do that. And, in fact, you're all going to get promoted. And they laughed. And, a year later, their maintenance plans were basically removing the curtain on the future, so that you could see out 5 years and generate better plans with better yields.
00:32:32:14 - 00:33:04:02
Doug
They all got promoted. They all became trusted advisors to the executives, because we were able to avoid $450 million in unnecessary maintenance cost, which is over a billion in today's dollars. That was 30 years ago. So 30 years ago we were doing this. But that problem of change management is so critical. You've got to bring people along with you because, even though they're doing the same job, they're doing it in a very different way, and they're not spending all their time crunching numbers.
00:33:04:03 - 00:33:26:04
Doug
But that goes back to that example I gave earlier of interactive optimisation. That algorithm would run interactively, where the human would change some inputs, run a model, get a better solution, run a model, get a slightly worse solution, constantly iterate until they got to a point where they could do that in 20 minutes at a clip. But the economic impact was significant.
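As a minimal sketch of the interactive optimisation loop Doug describes – a human adjusts the inputs, the model re-solves, and the planner keeps the best plan seen so far – the toy example below may help. The model, its settings and the yield scores are all invented for illustration; the real systems solve large optimisation problems at each step.

def optimise_plan(hangar_slots, overtime_allowed):
    # Stand-in for the real optimisation model: build a plan under the
    # human-chosen settings and return it with a yield score out of 100.
    score = 80.0 + 3.0 * hangar_slots + (5.0 if overtime_allowed else 0.0)
    plan = {"hangar_slots": hangar_slots, "overtime": overtime_allowed}
    return plan, min(score, 100.0)

# The human in the loop: a planner tries several settings, inspects each
# solution, and keeps the best plan rather than accepting the first answer.
trials = [(2, False), (3, False), (3, True), (4, True)]
best_plan, best_score = None, -1.0
for slots, overtime in trials:
    plan, score = optimise_plan(slots, overtime)
    print(f"slots={slots} overtime={overtime} -> yield {score:.0f}")
    if score > best_score:
        best_plan, best_score = plan, score
print("kept:", best_plan, "with yield", best_score)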
00:33:26:04 - 00:33:50:09
Doug
The business impact was significant. There were even some side benefits: we had created so much capacity, we brought third-party maintenance work in and did that work at a profit. We brought in work that had been outsourced at a higher cost, and did it in-house to save some money. We actually shut down one entire maintenance line for a year and put an aircraft back into revenue-generating service.
00:33:50:12 - 00:34:16:05
Doug
That's an eight-figure number. So that one project – and, to put it in perspective, it was myself and one other person for six months to build the first version, delivered in less than a year – generated those kinds of benefits. So it was a phenomenal outcome. But, from a failure perspective, it came that close to failing at many, many points along the way.
00:34:16:08 - 00:34:37:15
Kate
[Kate] And I think it's a fascinating example – and this is going to be my final question before I do throw to the audience questions – that example you alluded to, Doug: there is often a lot of fear and concern, and sometimes, you know, rightful fear and concern, for people working in a job who worry about how the introduction of an AI technology will affect their job.
00:34:37:15 - 00:34:56:08
Kate
And in fact, to the point of, you know, will it make them redundant by automating their tasks. Like, what is your take on the likelihood of jobs being impacted by AI? And do you or Evan have any advice for people in terms of, you know, how they might adapt in those situations and what skills they might want to focus on acquiring?
00:34:56:11 - 00:35:22:15
Doug
[Doug] I would love to start with that. So, two studies recently released by Goldman Sachs and McKinsey both indicated that 300 to 375 million jobs globally could be impacted – not lost, but impacted. And I think that there's an opportunity, obviously, for STEM people; we don't have enough STEM engineers to do the actual AI work. We need to continually train more of them.
00:35:23:01 - 00:35:53:08
Doug
But for non-STEM people, there's an old saw – well, a new saw that's come out: you may lose your job to a robot, but you're more likely to lose your job to someone that knows more about AI than you do. So you don't have to be a STEM person but, if you're non-STEM, being that product owner, that domain expert, the subject matter expert working with AI – that is the best job security for a non-STEM person in an AI world: to adapt and adopt and become part of the solution.
00:35:53:09 - 00:36:18:04
Doug
I'll give you a very concrete example. A friend of mine is a product manager for a technology company. He had 4 colleagues. They were responsible for writing product descriptions. It takes about 2 to 3 weeks for a good product manager to write a curated, edited, fully ready-to-go product description. He started using ChatGPT to augment his ability to generate product descriptions.
00:36:18:05 - 00:36:41:02
Doug
He reduced the time from 2 to 3 weeks to 2 to 3 hours. His 4 colleagues refused to adopt the new technologies – nope, we're going to do it the old way; we're going to continue to write; we're not going to use it. Those 4 were made redundant. So, that's it. That's a case-in-point example: you can adapt and leverage the new technology to get better – make yourself better, make your organisation better.
00:36:41:04 - 00:37:06:02
Doug
Otherwise, there will be consequences for some people. There are jobs at Walmart, frankly, that people don't want to do. That's where robotics comes in. Unloading and loading pallets in a warehouse – that's not a comfortable job to do, and it's a very dangerous job in some cases. So, we use robots in those examples not to replace people, but to do jobs that we really, really don't want people doing anyway.
00:37:06:03 - 00:37:30:11
Doug
Even long-haul trucking. Walmart – we have 10,000 tractors, 75,000 trailers. Walmart offers very lucrative salaries to people to become long-haul truckers, and people still don't want those jobs. So self-driving vehicles are going to be a part of the future, and we're experimenting with those things. So, it's not just job elimination; it's doing jobs that would actually be better suited to a non-human, to a robot.
00:37:30:14 - 00:38:01:12
Unknown
But there are myriad opportunities to engage and become part of the AI solution, working together with the STEM people that are actually doing the science. And that's the best job security. [Kate] I think that's excellent advice. And Evan, I want to build on that, because I've had a number of questions come through, and there's one here that I think is a really good example of that dilemma that people often face when they want to implement an AI project and they're trying to do it ethically, but they're also trying to figure out how to make it a success.
00:38:01:12 - 00:38:24:03
Unknown
And this is, I think, the kind of example of a barrier that some might face in that decision. They're trying to decide: do I become an adopter, or do I need to exercise a little caution about this change? So two questions have come in, and they both relate to data. The first question was about how you address the challenge of needing to keep data secure in the public sector,
00:38:24:06 - 00:38:47:04
Kate
understanding that there's often a lot of very sensitive and confidential data that is shared, and there are very serious implications if that isn't used ethically or in a secure fashion. But then the countervailing question that we received was from someone who said that the quality of that decision-making, the quality of those AI systems, is so inherently dependent on the quality and representativeness of the data sets,
00:38:47:04 - 00:39:06:02
Kate
and very often getting access to those raw data sets can be challenging. And they were asking: how do you incentivise and encourage people to make data more available? But, as you can see from the first question, there are equally valid concerns about how you ensure you're not making it too available. So, in your experience, how have you found the balance with that kind of question?
00:39:06:03 - 00:39:27:00
Evan
[Evan] Yeah. Look, it's a fantastic question. And if we just take a step back and think about my education: I studied a dual degree at UQ with mathematics majors in both of them, then a master's and a PhD, all focused on mathematics and algorithm development and stuff like that. At no point along that journey did I study cyber security.
00:39:27:01 - 00:39:51:09
Evan
Like, I have no background or understanding in cyber security. And I think if the AI team were to walk in the door and then be responsible for cyber security as well as developing the AI model, I think you're in trouble. So, from a data-protection perspective, that kind of sits squarely in the IT sphere, and within the IT team they're going to typically define the protocols and policies around how they maintain and protect the data.
00:39:51:09 - 00:40:08:14
Evan
That's clearly going to be informed by things like our Privacy Act here in Australia or, if you're global, GDPR in Europe. And what that will do is it'll define the framework for how you're going to manage and protect that data. Now back to the point of how do we then take that and do AI off the back of it?
00:40:08:15 - 00:40:31:08
Evan
I think you need this kind of sandbox-type, cordoned-off environment to begin exploring whether there's any value in the first instance because, as Doug mentioned, your North Star should always be: what type of value can I create here? And if you're running a project that's going to create maybe $1 million or $2 million of value, but securing that data long term and getting it working is going to cost you 10 to 20 million,
00:40:31:11 - 00:40:55:05
Evan
you should just stop. You should just stop straight away and you shouldn't even begin. But if you can see a huge opportunity – because data science isn't a given; data science has that element of creating a hypothesis and testing it – the first step really is to create, exactly to their point, a smaller subsample of that data to begin testing ideas on and playing around with.
00:40:55:06 - 00:41:19:05
Evan
And typically when you're doing that, because it's not the full data set, it's often possible to mask and anonymise that data in a way that means it's safe were there to be a cyber security breach – and I know my friends who work in the cyber security space say it's not a question of if, it's a question of when – so that at least when it happens in that sandbox environment, the exposure is limited, because the level of security there is going to be much lower, because you're letting people like me play with it.
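As a minimal sketch of that subsample-and-mask step, the snippet below takes a small random sample, drops direct identifiers and replaces join keys with salted hashes. The column names and scheme are hypothetical, and salted hashing is pseudonymisation rather than full anonymisation – real projects should follow their agency's privacy and security frameworks.

import hashlib
import pandas as pd

def sandbox_extract(df, id_cols, drop_cols, frac=0.05, salt="replace-me", seed=42):
    # Take a small random sample so the sandbox never holds the full data set.
    sample = df.sample(frac=frac, random_state=seed).copy()
    # Remove direct identifiers entirely.
    sample = sample.drop(columns=drop_cols)
    # Replace join keys with salted hashes so records can still be linked
    # within the sandbox without exposing the real identifiers.
    for col in id_cols:
        sample[col] = sample[col].astype(str).map(
            lambda v: hashlib.sha256((salt + v).encode()).hexdigest()[:16])
    return sample

# Hypothetical usage: keep a hashed client key for joins, drop names/addresses.
# masked = sandbox_extract(records, id_cols=["client_id"],
#                          drop_cols=["name", "address"])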
00:41:19:06 - 00:41:40:00
Evan
I'm not a cyber security expert, and I might do things that create risks that could be challenging if I were accessing the full data set – so that sandbox gives us the possibility to do it. Then, once I've proven the value and I walk back to the business and say there's a huge amount of value here, I've demonstrated it, then I'm probably going to have IT a lot more happy to work with me,
00:41:40:00 - 00:41:58:02
Evan
spending $2–$5 million setting up a way for the AI framework to access that data safely, and to be able to implement that and operationalise that in the business, which is what the cost is going to be. So I think it's a matter of, first of all, really beginning, exactly to Doug's point at the start, with where's the business case,
00:41:58:02 - 00:42:26:07
Unknown
where's the value, to convince the business it's worth investing money to protect the assets. [Kate] I think that's a really, really helpful and really genuine answer to that question, which I know is one that people do often struggle with. I've got another theme here that's come through in the questions, Doug, that may be very close to your own experience, which is around the role of experts when it comes to AI projects.
00:42:26:07 - 00:42:55:12
Kate
And so the first question that has come in – and this is the one that made me think of you – noted that there's a limited pool of people who have genuinely deep expertise in data science and AI; it's an area of great skill shortage in Australia. So those experts often get called upon to spend an incredible amount of their time explaining and educating other people in the organisation about what AI is and, you know, how things work.
00:42:55:14 - 00:43:19:08
Unknown
But they noted that there's a trade-off there, because it means there is less time for them to be building those secure solutions and, you know, working with technical teams, perhaps on things like cyber security considerations. So the question is: is there a kind of team, project or organisational model that you've found gets the best balance of incentivising the collaboration between different people and allowing for change management, but doing that productively?
00:43:19:09 - 00:43:43:01
Doug
[Doug] So the project that I mentioned earlier and another project that we did at Southwest Airlines, building a real-time airline operations control system, were literally at opposite ends of my career – kind of the beginning and just recently. Both projects used what we call 4 in a box. That may be a common term, maybe not – 2 in a box, 4 in a box.
00:43:43:02 - 00:44:15:11
Doug
Four in a box means we literally have in the room a subject matter expert – or more than one of that group, potentially; a data scientist; a data engineer; a systems engineer; and then maybe a product manager that's supervising the whole project. But that's really the crux: they literally sat in the same room. So when I worked on my project, I literally went to the maintenance base in Tulsa, lived there for 3 months, and we all sat in a conference room together.
00:44:15:12 - 00:44:47:14
Doug
When we did the project at Southwest, it was literally 4 in a box. Interesting story. The project was named posthumously after the gentleman that came up with the idea, Mike Baker. So we called the office 'the bakery'. And the bakery was where all the action happened. And that's where the 4 in the box ... That way, if the data scientist has a question about the data, or the data engineer has a question about the subject matter expertise, there's no emailing, phone tag or running to another building to have a meeting,
00:44:47:14 - 00:45:13:15
Doug
scheduling a meeting. It's all in real time. I can reach across the table and talk to that person. That kind of unit cohesion, if you will, of having these different disciplines literally in the box together – it's instrumental to the effectiveness of the organisation. Now, that's not always possible, but whenever it is possible, physically co-locate people. And you've got to have those kinds of expertise, and maybe more. You might have,
00:45:13:15 - 00:45:37:15
Evan
you might need a data governance person in the room, a data security person. It might be more than 4 in a box, but whenever they can co-locate and work together to expedite the question-and-answer and the delivery process, that dramatically increases the likelihood of success. [Evan] But if we come to that point, just to look at it from an organisational perspective, having people do that is a reasonable investment, right?
00:45:38:00 - 00:45:55:14
Unknown
[Doug] That's right. [Evan] I think, to get the company committed to that and to stick with that investment over the ups and downs – because, you know, as you mentioned with the other projects, they often come quite close to failure – the central theme of our book is that you've got to somehow, back to your point, have done that groundwork of showing the business case.
00:45:55:14 - 00:46:14:13
Evan
There's going to be huge benefit – your investment of those 4 people over 6 months is going to be worthwhile. And it keeps coming back, for me at least, and that's kind of a central message in the book: make sure the business case is there. Don't be reacting to the hype; have that resolution to have those people stay long enough, I guess, especially in these projects.
00:46:14:13 - 00:46:34:10
Unknown
I think in the book you mentioned one of these projects took, what – 5, 6, 8 years? [Doug] Eight years. [Evan] You hear that, people? [Doug] Yeah. From the time the original model – in x_ij's and in math – was written on a piece of paper until the system turned on and went live, it was 8 years. [Evan] And we've got another example.
00:46:34:10 - 00:47:11:00
Unknown
So, sorry to jump in there, but I think UPS's Dynamo system, right? [Doug] Right. Yeah. The Dynamo system ... actually, I'm sorry, it's ORION. [Evan] Oh, right. [Doug] American's yield management system is Dynamo, but ORION is UPS. United Parcel Service, a delivery company in the US, has a system called ORION – On-Road Integrated Optimisation and Navigation. They spent 10 years and $250 million to build the system that routes all 55,000 of their drivers on the shortest routes to deliver all their packages.
00:47:11:01 - 00:47:42:14
Doug
So every night they run 55,000 routing algorithms, one for each of their drivers, every single night. It cost 250 million over 10 years, but every year it saves 400 million in missed deliveries, fuel costs, CO2 emissions and driver costs. And they also learned some interesting rules. Now, this will be different here in Australia because of the driving patterns. But, in the US, we learned that you never want to make a left-hand turn as a delivery driver, because you're sitting waiting for the light.
00:47:42:15 - 00:48:07:02
Doug
Just make all the right turns. That's where that came from. That system always optimised so that, if you missed a stop, you'd just go around the block, right? But the economic impact and the operational impact were so significant. And that system is mounted in the truck, on an iPad effectively, now. It was a bigger, clunkier device when it first started, and it literally shows the driver the route and where they're supposed to be and where to drop the item off.
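The left-turn rule is a nice example of how an operational insight becomes a term in a routing model's cost function. The toy comparison below shows the mechanism: once the expected waiting time for an unprotected left turn is priced in, a longer all-right-turn route can come out cheaper. All the numbers are invented for illustration.

# Toy route costing with a left-turn penalty. Times are assumptions.
DRIVE_PER_BLOCK = 30.0   # seconds to drive one block
LEFT_TURN_WAIT = 90.0    # assumed average wait for a gap to turn left
RIGHT_TURN_WAIT = 5.0    # right turns barely stop

def route_seconds(blocks, lefts, rights):
    # Total route time = driving time plus turn-waiting time.
    return blocks * DRIVE_PER_BLOCK + lefts * LEFT_TURN_WAIT + rights * RIGHT_TURN_WAIT

direct = route_seconds(blocks=2, lefts=1, rights=0)   # short route, one left turn
around = route_seconds(blocks=3, lefts=0, rights=3)   # around the block, rights only
print(f"direct with a left turn: {direct:.0f}s")      # 150s
print(f"around the block:        {around:.0f}s")      # 105s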
00:48:07:02 - 00:48:37:03
Unknown
But it was a huge investment – and they're getting that back almost 2x every year that they've had the system. [Kate] And I think that example of why it is so important to invest in people, including through those co-located and multidisciplinary models, really does go to the heart of avoiding those project failures. And there's another theme that's come through in the questions, Evan, that I'd love to get you to answer for all the listeners at home.
00:48:37:04 - 00:49:12:01
Kate
Which is: when you're really trying to build that capability and build that trust and that culture over the long term, are there other important elements to it, such as staff training and skills development, beyond, you know, the opportunity to sit in a room and work on a project firsthand? And, related to that, someone else has asked: we hear all the time that an AI system is designed to learn continuously from its inputs and to adapt its answers based on, you know, changes in the data and patterns detected.
00:49:12:02 - 00:49:30:13
Unknown
But the question was: should humans be taking the same approach of learning continuously from those outputs? And again, how do you build both that training culture and that kind of continuous-learning culture? [Evan] Absolutely. That's such a great question. We have a quote in the book, Doug, and I know I'm going to get this wrong, so you might get it right, from Gene ... is it Gene
00:49:30:13 - 00:49:53:03
Unknown
Woolsey, who talks about ... [Doug] Multiple quotes. Which one? [Evan] The one about a man would rather ... [Doug] Oh, he would rather live with a solution ... excuse me: the manager would rather live with a problem that they can't solve than implement a solution that they can't understand. [Evan] Exactly. And that kind of comes, I think, to the core of the question that you're asking around the training and skills development.
00:49:53:03 - 00:50:11:10
Evan
And it comes back to the project I did here with the federal government around ... We came up with a whole bunch of insights and revelations around how they could improve their services. But because that data-driven decision-making wasn't a part of the culture, nor the training, there was this chasm that they had to leap over.00:50:11:10 - 00:50:38:11
Evan
And they looked down into that chasm – at the bottom of that chasm was unemployment. They'd lose their jobs, sort of thing, and there's no way they're going to take that jump and that leap when, on the other side, it looked almost identical to this side. So, what are the benefits? Why am I going to leap over that? And so, when we looked at those kinds of situations, to your point, I really think that embedding of that culture matters. Again, in the book, we talk about your time at American Airlines, Doug, and I think you mentioned that one of the greatest drivers of success there was at the top.00:50:38:12 - 00:51:01:12
Evan
They drove this kind of analytics culture, and that was probably people across the organisation understanding the value of it, but also expressing curiosity and interest to develop their own skills in this space. I met a good friend of mine at Santos, and he was originally a guy that worked out in production, out in the fields, and he understood the fields extremely well, and at some point he had this continuous itch to go,00:51:01:12 - 00:51:19:09
Evan
I've got to be able to do this better. And, somewhere along the journey, he met someone who was talking a bit about data science. He said, tell me a bit more about that – I think we've got a heap of data that we can use to improve the operations. And so he started teaching himself data science. Now he's become an essential piece of the organisation because he spans both of them.00:51:19:10 - 00:51:42:03
Evan
And I think, when you come back to training, there's a huge amount of value to be had in upskilling people to leverage data-science technology – that continual training and improvement of skills. I don't think you need to do a PhD. I don't think you need to do a master's in any of this, but there's a lot of basic stuff that, given we're here with QUT, organisations like QUT can teach to help train businesses.00:51:42:03 - 00:52:06:13
Evan
And I think, in particular, one of the big things – and as a mathematician I get a lot of pushback on this – is that everyone hates maths from school. And this story would take too long to tell, but one of the best stories I often tell is about my younger brother, who's a chef. He did really poorly in mathematics in high school, and he went over and represented Australia in the Pastry Olympics, and they scored sixth place, which I think was the best outcome for Australia ever in the history of the nation.00:52:06:14 - 00:52:26:12
Evan
And while he was over there, he discovered this book from one of the masters of ice-cream making. It was rumoured to exist; no one believed it did until he found this book from this Italian master. And what was special about this book was the way the Italian master described his recipes. He kind of talked about, you need to get the fat within this range, and you need to get the sugars within these ranges,00:52:26:12 - 00:52:40:06
Evan
and you need to achieve this kind of outcome. And when I was sitting there reading it with him and talking about it, we're looking at it going, really, that's a linear program. That's a mathematical optimisation problem. And he said, really? He said, how do I do it? I said, dude, you hated maths. You barely scraped through it at00:52:40:08 - 00:52:52:07
Evan
high school. And he said, 'I don't care, I'm interested in this'. And it was a bit of an aha! moment when he said 'I'm interested in this'. So we sat down and we framed the thing in Excel, and I said to him, 'Look, just to be honest with you, they don't teach this until third-year uni, right? This is hardcore maths, what you're doing.'00:52:52:07 - 00:53:09:05
Evan
He said, 'Forget it, I'm going to give it a shot.' So, I left it with him. He took it away. A week later, he came back and he was solving these optimisation problems, working with the constraints, and understood things like shadow pricing to understand the impacts of changing the constraints on the optimiser, on the objective function.00:53:09:05 - 00:53:29:11
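The recipe problem Evan describes maps naturally onto a linear program: minimise ingredient cost subject to the master's fat and sugar ranges. Here is a minimal sketch using scipy; the ingredient compositions, costs and ranges are all assumed for illustration.

```python
from scipy.optimize import linprog

# Ingredients: cream, milk, sugar (decision variables = grams per kg of mix)
cost = [0.010, 0.002, 0.003]   # assumed $ per gram
fat = [0.40, 0.04, 0.00]       # assumed fat fraction of each ingredient
sugar = [0.00, 0.05, 1.00]     # assumed sugar fraction of each ingredient

# Keep total fat in [60, 100] g and total sugar in [160, 220] g per kg.
A_ub = [fat, [-f for f in fat], sugar, [-s for s in sugar]]
b_ub = [100, -60, 220, -160]
A_eq = [[1, 1, 1]]             # the batch must total exactly 1000 g
b_eq = [1000]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 3, method="highs")
print(res.x)                   # grams of cream, milk, sugar
print(res.ineqlin.marginals)   # shadow prices on the range constraints
```

The marginals are the shadow prices Evan mentions: how much the optimal cost changes if you relax a fat or sugar bound by one gram.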
Evan
And this is a guy that flunked maths. And if I come back to the public service, people are going, 'Oh, this AI thing.' If you understand it only in a theoretical and kind of disconnected way, without applying it and seeing its usefulness in real life, it's hard to enter that space. But when people bring these problems that they're facing in the government, the public service – I really want to solve this –00:53:29:11 - 00:53:46:06
Evan
and I go, okay, here's this machine-learning approach that you can use, now they've got this connection between the theory and the practice. And I find, like with my little brother and others, that they really run with it. They're passionate about the problem. It doesn't matter how they're solving it, and if AI can be a core part of that, they'll do it now.00:53:46:06 - 00:54:02:04
Unknown
They'll go with it – exactly your point. [Kate] And I think it goes back to the empathy, though, that you talked about. If you really care about the problem and you're invested in getting it right and it's important to you, you are much more likely to learn and be curious and take it seriously and push through failure. And I think that's a really fabulous example.00:54:02:04 - 00:54:41:06
Unknown
And I would love to have brunch with your family, given you have a talented pastry-chef brother. I want to pose a question on a slightly different topic to you, Doug, that's come in, but I think it's a really, really important one. The question noted that, obviously, the use of AI can be incredibly energy intensive. And they asked, how does an organisation think about that, in terms of not enabling a whole host of really low-value, high-frequency AI projects that don't really, you know, achieve anything but are having these other consequences, such as energy use?00:54:41:07 - 00:54:57:12
Kate
But also, how do you mitigate that? How do you think about, you know, managing the energy use in the long term, and the choice of data centres you're using and things like that, to mitigate it not simply through your practice but through the way you're thinking about your own energy systems?00:54:57:12 - 00:55:23:08
Doug
[Doug] Right. That's a very thoughtful and insightful question. In the US, you know, we've had this big push towards green energy and electric cars, and we realised that there just aren't enough chargers for everybody, nor enough power-generating capability, right? It's the same with computing power. We all know that there is a finite amount of computing power, and it's far exceeded by what people want to do.00:55:23:09 - 00:55:44:09
Doug
So, first of all, capacity has to at least try to increase, which it is, you know. Related story: my son is studying construction at Texas A&M, and he's going to be working on a project building a new Microsoft data centre. It's going to be a very green facility. They're using things like geothermal power and hydro power to power data centres.00:55:44:09 - 00:56:12:07
Doug
There are even data centres, believe it or not, floating in the ocean that use wave power to generate the electricity. So we definitely need more capacity. We definitely need new and more innovative methods that are friendly to the environment for generating the electricity. But let's step back before that. I totally agree with not using this technology in a way that is capricious. It's not,00:56:12:09 - 00:56:40:03
Doug
let's just run off and do as many projects as we can. I think prioritisation is the key solution – for the business, but also for the electric power shortage: be very parsimonious and very, very selective about the projects that you pick. Now, it wasn't in a power context that our CTO set this rule, but he doesn't even want to hear about projects that don't meet a certain threshold of value.00:56:40:05 - 00:57:10:07
Doug
He doesn't want any resources allocated, or those projects begun, executed or reported upon, unless there's a tangible business value, an economic impact, associated with them. That needs to be the same threshold for projects before you start consuming power crazily, right? Another thing – it sounds very mundane, but it's very good discipline around cloud: turning on instances when you need them, turning them off when you're done.00:57:10:08 - 00:57:28:12
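That discipline is easy to automate. As a minimal sketch (assuming AWS, with development instances tagged Environment=dev – the tag name is an assumption), a scheduled job could stop whatever is still running out of hours:

```python
import boto3

ec2 = boto3.client("ec2")

# Find running instances tagged as dev workloads.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Environment", "Values": ["dev"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

idle_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
if idle_ids:
    ec2.stop_instances(InstanceIds=idle_ids)  # turn them off until morning
    print(f"Stopped {len(idle_ids)} instances")
```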
Doug
I had a student project – my students did a project in one of my courses. I got them all signed up for AWS accounts. I told them how to do it, what to do, the tools to use. They went off, and somebody came back and complained that they got a $500 bill on their credit card from Amazon.00:57:28:14 - 00:57:50:05
Doug
And I was like, well, did you go back and look at your logs? Did you turn off the servers when you ...? Turn them off? What do you mean, turn them off? They'd just left their servers running. So, that sounds mundane, or even maybe silly or irresponsible, but you have to have good operational discipline. We have a goal at Walmart to reduce our cloud costs every year by 10%. Every single day,00:57:50:06 - 00:58:22:13
Unknown
that's what you will do – you will reduce your cost. [Kate asks a question that can't be heard] [Doug] Capacity is increasing, but we still have this goal to continuously reduce our costs, to positively affect the power problem the questioner is asking about – to be very judicious about it. [Kate] I think it's a really fascinating example. I don't know if you want to add to this point at all. [Evan] I think, from the perspective of the compute, the way things are going nowadays is that most of it's going to the cloud.00:58:22:14 - 00:58:44:05
Evan
So really, I don't have a choice as to how those cloud centres are designed. Like, that's not something I decide on. I don't get to tell AWS, Microsoft or Google how to build them, or Oracle or anyone. What choice I do have is who I go with. [Kate] Yeah. [Evan] And you can see all the providers are now very strongly focused on using renewable energy to power these data centres.00:58:44:06 - 00:59:09:15
Evan
The amount of power consumed by data centres nowadays, if you combine it together, is more than that of many nations globally, especially smaller nations, in total. So it's a real concern and a real challenge: how do we manage that? And so what I definitely do, and we do as an organisation, is look at what the cloud provider is doing to resolve these kinds of challenges, because it's not something I can do much about except vote with my wallet.00:59:09:15 - 00:59:31:09
Kate
[Kate] Yeah. And it's, you know, pretty interesting to think about the role of government there, given that, you know, in Australia the public sector is generally about 30% of the economy. It's a huge buyer. We're seeing the government really shift data-centre behaviour in areas like security and confidentiality and cyber-security standards. So it's, you know, I think a very powerful lever that the government has as that major buyer.00:59:31:12 - 00:59:50:08
Evan
[Evan] I saw one fascinating project exactly on this, from a government perspective: there's a team up in Brisbane looking at AI algorithm training. The most intensive part of the compute is training an AI algorithm. When you're building a machine-learning tool and you're trying to train the parameters of that machine-learning algorithm, that's a whole training phase.00:59:50:08 - 01:00:12:10
Evan
Once you get to what's called inference, and you're actually leveraging it, the compute required is a lot less. So what they were doing is trying to schedule AI workloads to happen in the middle of the day, when solar panels are generating the most power possible – especially in situations where that solar generation is causing negative prices on the grid.01:00:12:11 - 01:00:38:06
Evan
And so what they're saying is, well, we should put all that AI training at that point in time, and try to schedule it to meet these kinds of challenges. Now, that may not always be possible from a business perspective. But back to your point about government: if government does have these large AI workloads, there's a possibility of leveraging the natural variability of our renewables to coincide with the energy requirements of these things.01:00:38:07 - 01:00:54:12
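As a toy sketch of that scheduling idea: given an hourly price forecast (the numbers below are invented), pick the cheapest contiguous window – which, on a sunny grid, lands around the midday solar peak, sometimes at negative prices.

```python
# Invented hourly grid prices ($/kWh) for one day, 0:00 through 23:00.
hourly_price = [0.30, 0.28, 0.25, 0.22, 0.20, 0.15, 0.10, 0.05,
                0.02, -0.01, -0.03, -0.04, -0.04, -0.02, 0.01, 0.06,
                0.12, 0.20, 0.28, 0.33, 0.35, 0.34, 0.33, 0.31]

def training_window(hours_needed: int) -> list[int]:
    """Return the cheapest contiguous block of hours for a training run."""
    starts = range(len(hourly_price) - hours_needed + 1)
    best = min(starts, key=lambda s: sum(hourly_price[s:s + hours_needed]))
    return list(range(best, best + hours_needed))

print(training_window(4))  # -> [10, 11, 12, 13], around the solar peak
```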
Kate
[Kate] And, you know, I think it's a really interesting question for Australia. You know, going on a little diversion here, just a sort of industry-development, economic question for us, because, you know, we have a large landmass, we have abundant access to renewable energy sources, and we've been a leader in areas like solar and hydro and things like that.01:00:54:12 - 01:01:18:10
Kate
And then we've also developed a pretty strong capability around data centres. One of the biggest sales of a company in the last few years was AirTrunk, an Australian-founded company which has now expanded and provides data centres right throughout the Asia-Pacific region. And so I think, you know, really thinking about that role of government, not only in leading good behaviour, but in understanding it has the potential to drive a whole new industry.01:01:18:10 - 01:01:46:03
Kate
And I think it's so powerful because, so often in a technology transition, the complexity comes when you have so many different individual parts that need to be changed simultaneously. And one of the interesting things, when you do have those centralised capabilities like data centres, is that if you can usher in that change at that central point, it makes for a far simpler transition in terms of, say, energy sustainability than, you know, a behavioural change within every individual organisation –01:01:46:03 - 01:02:04:14
Kate
not to diminish that in any way. I wanted to change tack a little bit here, because there's a question that's come in. I'm going to read it almost verbatim, because I actually kind of love the honesty in the way this question is conveyed, but I think it goes to a really important and serious point.01:02:05:00 - 01:02:39:12
Kate
So the question is: we know that the human brain is complex and can be prone to brain farts – I don't know if that's just an Australian term [laughs while Doug says 'I've had a few'] – and to mental illnesses or, you know, biases in our cognitive decision-making. How can we guard against complex AI systems becoming susceptible to similar issues, or, you know, having their own variants of limitations in their decision-making, answers and approach?01:02:39:13 - 01:03:17:14
Unknown
Or do we simply have to expect that of those systems and design, you know, our approach around that? [Doug] So I think there are two ways that I would approach that problem. First of all, in building any kind of model – machine-learning model, statistical model, optimisation model, but specifically machine learning and statistical – there's a set of tests that you do to avoid what's called overfitting, which means making the model conform only to the data that we see, and then bias, in the literal statistical sense of the term.01:03:17:14 - 01:03:39:04
Doug
And then there are other kinds of biases. You literally test for those things. And you constantly have to be observing and controlling and refitting and retesting to make sure that you don't get what's called data drift or model drift, where the model starts drifting and becoming biased. Like, for example, recently, unfortunately, in the US we had multiple huge hurricanes.01:03:39:04 - 01:04:04:08
Doug
[Kate] Yes. [Doug] And in Florida, we have a dozen of our fulfilment centres, consolidation centres and distribution centres. We had to route all the product away from those facilities. That's going to skew the results for the forecast for the next six months, because now you're going to have this drop, because nothing happened – there was no activity. So we have to back that data out to avoid a bias in that particular instance.01:04:04:08 - 01:04:31:08
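The overfitting test Doug refers to is, at its simplest, a comparison of performance on the data the model was fitted to versus held-out data. A minimal sketch, using synthetic data and a deliberately overfit-prone model:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# An unconstrained tree will memorise the training data.
model = DecisionTreeClassifier().fit(X_tr, y_tr)
gap = model.score(X_tr, y_tr) - model.score(X_te, y_te)
print(f"train-test accuracy gap: {gap:.2f}")  # a large gap signals overfitting
```

Drift monitoring is the same check repeated over time: score the model on each new window of data and alarm when performance, or the input distribution, moves.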
Doug
It happens all the time. And the only guard against it is the continuous refitting and checking and testing to make sure that you're not becoming subject to one of those things. The other thing that I would say is that, even if you're not an experienced data scientist or even a trained data scientist, there are systems now that are referred to as automated machine learning,01:04:31:09 - 01:05:02:15
Doug
or auto ML for short. They're literally software packages where you give your data to the system, and it will fit all possible combinations of every model over all the data variables that you've given it. It will then score those models best to worst on predictive accuracy, bias, overfitting, etc., and it will even go so far as to do what's called ensemble modelling, where it will say, during this part of the year this model's really better, in that part of the year that model's better,01:05:03:02 - 01:05:27:12
Doug
and even, in this other quarter, these three models, when combined together with these weights, will give the best answer. It helps you avert a lot of that risk of bias and overfitting, and of not having the best possible model. That's all been automated for you. So even if you're not an expert at building models, that modelling process has been compressed, but made better at the same time.01:05:27:12 - 01:05:52:08
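A toy version of what those auto-ML packages do: fit several candidate models, score each by cross-validation, and rank them best to worst. Real packages also search hyperparameters and build the weighted ensembles Doug describes.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, random_state=0)
candidates = {
    "logistic": LogisticRegression(max_iter=1000),
    "forest": RandomForestClassifier(random_state=0),
    "boosting": GradientBoostingClassifier(random_state=0),
}
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")  # leaderboard, best model first
```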
Kate
[Kate] And I want to come to you as well [Kate gestures at Evan], because I'm sure you must have a lot of thoughts on this and a lot of insights from your career. But I just want to follow up on that question, Doug, and perhaps reference back to some of the other conversations we've been having today. I think that description of the way you conduct the checks, and, you know, your advice that there are tools available that are increasingly helping with the accuracy of that, is really sound.01:05:52:08 - 01:06:08:15
Kate
But I'll just push you a little and play the devil's advocate here. As you were saying earlier, there's an incentive for a huge organisation to be reusing models and to have, you know, huge numbers of teams doing it. And obviously those teams can be in all different parts of the country, or even in different locations.01:06:08:15 - 01:06:31:14
Kate
They can have different levels of skills and access. And, you know, what might be the best way to do it isn't necessarily the way it's consistently done across the organisation. So are there any other techniques you can think of – whether procedures or processes or training or other things – that help ensure that best practice is the most common practice?01:06:31:15 - 01:07:03:01
Doug
[Doug] Okay. So we all know from organisational psychology about positive and negative reinforcement – systems of penalties and incentives, right? As an example, I just became the leader of the AI Center of Excellence within our data organisation. And we identified, through a survey, 100% of all the GenAI projects that are currently going on. It was about 25 projects just within our area.01:07:03:02 - 01:07:33:08
Doug
And we said, these 3 projects are actually the same project in different contexts. These 3 teams should not be working independently. Yeah, they should not be building everything from scratch every time. So we incentivised reuse by saying your project will be rewarded and given more visibility and more resources if you leverage what the other people are doing, and it will be penalised – it will be shut down –01:07:33:09 - 01:08:02:01
Doug
if we find out that you are starting from scratch with a clean sheet every single time, because now you're literally wasting the company's resources. The challenge is – and I'm an engineer; historically, we want to write code. We want to build things. We want to show how good we are. But I eventually had to rewire my brain, and rewire the brains of my people, to say, we're not here to write code.01:08:02:03 - 01:08:30:15
Doug
We're here to solve problems and create value and create efficiency and economy and economics. The code is a means to an end. Yeah, the end is making the business or the organisation run more effectively, more efficiently, for the constituents. And we literally do use incentives and penalties. Now, we plan to take this organisation that we've created and roll that out Walmart-wide, because our CTO has the same set of rules.01:08:31:00 - 01:08:54:13
Doug
We're just doing it for ourselves. But ultimately we're going to roll this out to everybody else, because he talks about reuse. And we have another senior VP in our company; his mantra is platforms, platforms, platforms. It's like in real estate – location, location, location, right? He literally stands up in front of all our all-hands meetings screaming, in a good way, in a fun way,01:08:54:15 - 01:09:25:02
Doug
'platforms'. What does he mean by that? We want to create one platform – build it once, have it run many times over – so that we can continue to solve more problems and not keep reinventing wheels. And in an organisation of 15,000 people, or 5,000 people or whatever, you have to have those disciplines. You have to be disciplined. And that may sound like, without rules there'd be chaos or whatever, but that's the way we enforce things.01:09:25:03 - 01:09:46:11
Unknown
It's not only at Walmart but in general with my students – I tell them the same thing: platforms, platforms, platforms; build once, run many times. [Kate] I mean, I think that challenge of avoiding working in an organisational silo would probably resonate with every person listening, because, you know, government is not only a single large organisation, it's multiple single large organisations, all part of one group.01:09:46:11 - 01:10:09:05
Unknown
And, you know, avoiding that risk is often ... [Doug] It's probably difficult across departments. I totally get it. [Kate] But it's very important for the same reasons. [Doug] I didn't think any of this was easy, right? [Kate] No, no. [Doug] None of it's easy. It's actually quite challenging, and that's why the book is on, you know, the failure. It's a very persevering practice. And it's no different from building a building or building a bridge.01:10:09:07 - 01:10:43:05
Doug
We note in our book – one of the books that inspired me is called 'To Engineer Is Human' by Henry Petroski – he talks about bridge failures and cases where people leveraged something that somebody had done before, but it didn't fit the new context, and the system failed. So just because there are systems like AutoML, or just because somebody else built something, you still have to rigorously go through and check that it's still relevant in context and that you're not misusing the platforms.01:10:43:06 - 01:10:59:13
Unknown
But definitely the reuse, and incentivising the most efficient use of resources, is very key. [Kate] And Evan, did you want to build on that? [Evan] A really quick comment, because we're probably running out of time here. Just to your point, Doug – I know the person in the audience that asked that question has probably asked a second question to say, but wait a second,01:10:59:15 - 01:11:16:12
Evan
if you've got these systems for managing bias and variance and controlling them, then why do we still see cases like – I think it was the Apple Card – where they approved the husband, but not the wife, to get the credit card? Or why do we see systems like Google's where, after they trained it for classification of objects in images,01:11:16:12 - 01:11:38:07
Evan
they classified people of colour as gorillas? And then even the biggest one of all is COMPAS, the tool that's used in the judicial system in the USA for classifying the risk of recidivism, to see whether you'd grant someone bail or not. And they say, well, COMPAS was biased against people of colour. And unfortunately, all these claims are wrong.01:11:38:08 - 01:12:03:00
Evan
Society's understanding of these things is wrong, and why it's wrong is because all these algorithms have this trade-off. COMPAS, for example – the algorithm they used in the United States – got a lot of bad press, I think undeservedly, because people misunderstand what's going on here. COMPAS has two competing objectives: one of them is called individual accuracy, and the other one's categorical accuracy.01:12:03:01 - 01:12:21:10
Evan
And you can't win both at the same time, either. You can have the thing highly accurate, and unfortunately what that's going to mean is people of colour will be classified as higher risk more often; or you can have it treat people of colour more fairly, but then the white people over there are going to be treated less fairly, because it'll be less accurate on them.01:12:21:10 - 01:12:39:07
Evan
You've got this trade-off going on there. And one of the best books that actually goes into this and properly analyses it – there are a lot of superficial books that do an extremely poor job of it – is 'The Alignment Problem' by Brian Christian. He unpacks it properly. He shows the full complexity of it. And what we really come to, at its core –01:12:39:07 - 01:13:04:14
Evan
it's got nothing to do with AI, absolutely nothing to do with AI. It's got everything to do with us as a society. What do we value? What are our ethical values? All AI does is exacerbate these questions we've never asked ourselves. It brings to the fore that, hey, this is really what's happening. And in fact, if we think humans don't have biases – they have them all over the place, which goes to one of the questions that we had before.01:13:05:00 - 01:13:28:10
Evan
AI just brings that to the fore, because what AI does extremely well is do things at scale. That's the whole purpose of a lot of AI systems: to do things at scale, and to do them better than perhaps people do them manually. And so I really think the challenges around that are more about us as a society deciding on what's important for us and what's not, and then reflecting that in the algorithms themselves.01:13:28:11 - 01:13:45:11
Kate
[Kate] I think it's a really powerful point. And I'm sure it's one that's, you know, very much on the minds of the public service, given it plays that national role in policymaking, and so many of the decisions have very, very real and very significant impacts on people's lives. So it carries with it that natural responsibility to be thoughtful.01:13:45:11 - 01:14:06:04
Kate
But, as you say, there's very often that risk that some of those questions haven't been thought through or haven't been naturally resolved, and there can even be an unacknowledged tension in the public service: you may have a political imperative to try to solve a certain problem, and then you're balancing your personal obligations and your organisational obligations to01:14:06:04 - 01:14:28:07
Unknown
kind of do the right thing, or to do it slowly and carefully. It's a very real and lived problem. [Evan] Maybe one final comment that's worth mentioning on that: after ProPublica lifted the lid, it created a controversy that was probably worthwhile to bring to the discussion. They tried to take apart COMPAS as being a black box and being biased and discriminatory, which, at the end of the day, was incorrect.01:14:28:07 - 01:14:49:13
Evan
But it was a good discussion to have. Mathematically, a mathematician called Jon Kleinberg – he's famous for a lot of the work he does in network analysis – effectively proved that you've got to choose: you cannot have both at the same time. And it's effectively coming back to the problem of variance versus bias. You can't have an algorithm that avoids overfitting and has no bias.01:14:49:13 - 01:15:09:03
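The result Evan refers to is usually stated like this (Kleinberg, Mullainathan and Raghavan, 2016): for a risk score \(s\) and groups \(g\), you would like, simultaneously,

\[
\Pr(y = 1 \mid s, g) = s \quad \text{(calibration within each group)},
\]
\[
\mathbb{E}[s \mid y = 1, g] \text{ equal across groups}, \qquad
\mathbb{E}[s \mid y = 0, g] \text{ equal across groups}.
\]

The theorem says all three can hold only in degenerate cases – when the groups have identical base rates \(\Pr(y = 1 \mid g)\), or the predictor is perfect. Otherwise, as Evan says, you have to choose.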
Evan
It's this core trade-off. It's impossible. It's like Heisenberg's uncertainty principle in quantum physics, where you only get to know position or velocity. Take your pick. You don't get to have both. And that's kind of where we're at with some of these problems, where we thought that technology could solve them, that it would create the solution.01:15:09:03 - 01:15:31:04
Unknown
We've become these technophiles that look to technology for the solutions. But actually – and this is part of the message in the book – it comes back to human problems that we haven't resolved, which AI is exacerbating. [Kate] I think that's really insightful. I want to ask another question. It's a different tack, but I think it actually goes to that question of ideology and values as well,01:15:31:04 - 01:16:02:06
Kate
but with a very different example – a really, really pertinent one. So one of the questions that has come in is: what do you think about the role of open source in data science, AI and ML? And I think that's intended very broadly, because obviously there's a question of open-source data sets, but there are also open-source models versus privately developed AI models. So the question is, to what extent should the public sector be thinking about embracing or supporting open-source data or open-source models?01:16:02:06 - 01:16:20:09
Unknown
And what are some of the trade-offs to think about? And I'll give you a very specific example. If you remember, there was a board coup, if you like – or, you know, a question mark about OpenAI's CEO and whether he should be removed – and then he was sort of reinstated. And I know there were a lot of private companies,01:16:20:09 - 01:16:38:03
Kate
I was working as the CEO of the Tech Council at the time – there were a lot of private companies that instantly became incredibly fearful, because they had built so many software products that were reliant on OpenAI and had not contemplated the risk that this company could suddenly collapse in chaos. They were suddenly realising that very key dependency risk.01:16:38:05 - 01:17:00:11
Unknown
So there's, you know, a trade-off with open-source systems. Sometimes they have better reliability, from, you know, more eyes looking at them, which can improve the efficacy, but there are also other trade-offs in terms of utility and confidence in errors. So, yeah, how does the public sector think about that? [Doug] So, I am definitely not the policy centre for Walmart, nor for any company I've ever worked for.01:17:00:12 - 01:17:28:06
Doug
But all I can say is, open source is typically used by early-stage companies, startups that have limited financial resources, and they're economising. Some big companies use open source. Other companies, like Microsoft when they bought R – they literally bought Revolution Analytics – took an open-source product and then privatised it. Right? That's happened with Linux operating systems.01:17:28:07 - 01:17:56:09
Doug
Open source has been a challenge for a long time. I think the general principle that I know in big corporations, and I would imagine in government, is a risk-aversion strategy. It's not a matter of economics in terms of the financials ... We'll pay the extra for the licensed, controlled, secure product to avert the risk of deleterious code getting behind the firewall – that somebody has embedded something in the open01:17:56:09 - 01:18:24:11
Doug
source. So I think – and again, I'm not speaking for Walmart, I'm speaking in general – that big companies, and anybody that has an appropriate level of risk management and risk aversion, would be very, very careful and very selective when they choose, or if they choose, to use open source. And if they are using it, they would probably do it in very self-contained, sandbox-type cloud environments, with no connection to the enterprise systems01:18:24:11 - 01:18:44:12
Unknown
and what's going on behind the firewall. So you trade off the cost versus the risk. [Evan] I think that's a fantastic answer you've given, Doug, because you could sit back in your armchair and philosophise about all sorts of things with open source, but when you come to the pragmatics, things like that are going to drive the underlying decisions.01:18:44:13 - 01:19:03:00
Evan
I had someone throw this question at me, exactly around the COMPAS algorithm I was mentioning a second ago: well, then what's the solution? And, at the moment, within the Australian Government we've now got a national AI safety framework that was released about a month or two ago now – I think it was in August. And one of the requirements there is transparency, explainability.01:19:03:00 - 01:19:20:07
Evan
So, if someone isn't granted bail, they'd like to know why. And what I think with open-source systems is, sometimes they might be able to provide a solution – bearing in mind the caveats Doug has given – where we can say, we're using this system; we're not using a black box; it's open source; everyone can go and check it out.01:19:20:07 - 01:19:52:00
Evan
The data sets we're training on are typically open too – so data.gov, where you can get access to the government data sets. And if they're choosing that approach, then it's a very transparent and explainable framework that people can look at and go, here's what's happening and here's why. The problem you're then going to have is that you're going to open, I think, a can of worms, where every man and his dog is going to have an opinion all of a sudden, and everyone's going to want to play games with which data we're using to train this, and which specific open-source model,01:19:52:00 - 01:20:14:00
Evan
and you probably shift the challenge somewhere else, to an area that might actually be more difficult, even impossible, to solve, because you're just never going to be able to break through the discussions. It's almost like gerrymandering of boundaries, of regions of voters in a country, where you keep shifting it to try and shift the balance of power.01:20:14:01 - 01:20:38:05
Evan
And you could have that going on endlessly between politicians over the open-source model and the data. Whereas once it's a black box – perhaps it's a negative that it's less explainable, it's a proprietary model – you're just not going to have those discussions. Secondly, like Doug said, the liability now lies with that organisation, which I think is important for government: to be able to say, well, if it stuffs up, we can go and talk to someone about the stuff-up, instead of, with open source,01:20:38:05 - 01:21:02:14
Unknown
well, it's open source. So I think there are probably scenarios where one is going to be more appropriate than the other, and I don't think there's necessarily a broad-scale answer, especially given what Doug mentioned. [Kate] I think they're both really thoughtful answers to what is a fairly perennial question, in AI and data but with software more broadly. I want to ask a question on this.01:21:03:00 - 01:21:34:02
Kate
This may well be a final question, just given the time, but I hope it's a hopeful note to end on. Obviously, a key ingredient in successful AI projects is being able to attract talented, passionate, thoughtful, you know, curious people into your organisation. And that's not always, you know, easy in government. And it's sometimes particularly challenging in Canberra, because we're a pretty small city, and we have one of the lowest unemployment rates but the highest proportion of the workforce already working in tech.01:21:34:02 - 01:21:56:03
Kate
So, you know, it can be genuinely challenging to try and attract that talent. And obviously, as well, in the public service you do have the transparency of salaries and the expectation that taxpayer money will be well spent. So it can be, you know, a little harder to offer higher salaries, big bonuses or, you know, employee share schemes – all the sorts of strategies that a private-sector organisation might use.01:21:56:04 - 01:22:26:08
Unknown
So what do you think? What is your advice for all the listeners at home about other ways to attract that top talent into your organisation and to retain them? [Doug] That's a perennial, universal challenge – in government, academia, industry, big companies. I mean, I don't think there's any panacea to it, but I can just say this: I've built many, many teams in many sizes of companies, from my own startups to big companies,01:22:26:09 - 01:22:55:11
Doug
Fortune 50, Fortune 1. And I would just go back to: you have to create an environment where people – engineers, scientists – can work on problems that matter and make a difference, because really bright people don't like to waste their time. They don't like to have their time wasted by others, and they want to work on something that matters, really importantly, to somebody else.01:22:55:12 - 01:23:21:01
Doug
It may be easier to measure that in corporate America, in dollars and cents. But if you can find people that are well-trained, well-educated, know what they're doing – all those characteristics that you listed off – and you can create an environment that leverages all that, it's about giving them the opportunity to do something that matters, and fitting people into the right roles.01:23:21:02 - 01:23:47:00
Doug
There are some people that are really super brilliant at building models who, if you put them into a job doing what's called machine-learning ops – making sure that the models are running, the refitting, all the mechanical work – they'll quit. You've got to fit people into the right ... Now, there are other people that love that, that are fastidious, detail-oriented people, and it's their passion, their mission, to make sure that these models are always running.01:23:47:00 - 01:24:08:10
Doug
They've got plenty of memory and capacity. And you've got to fit people into the right roles, where they can do things that they believe matter in the bigger picture. And, to me, that's the one fundamental common denominator, because if they're just working away like drones on things they don't think matter to them or anybody else, they're just not going to stay, in any environment.01:24:08:11 - 01:24:34:08
Kate
[Kate] Yeah, it's a really fascinating example. It just reminds me, there was a book – I think it was actually Andrew Leigh, one of our federal politicians, who published a book looking at innovation and equality. And he cited a study that has always really stayed with me, which said that when they looked at the career paths of MIT graduates, the very top graduates very consistently chose to go work in innovative areas, even where they were uncertain and paid less. It was actually the graduates who performed less well who, in a booming economy, might go for a startup,01:24:34:08 - 01:24:52:04
Kate
but in a more insecure economy would go and work for a financial institution or something that was, you know, seen as less uncertain. But it was sort of this proof point, I guess, about the importance of passion and challenge and problem-solving to some of that top talent. Evan, did you want to add to that as well?01:24:52:07 - 01:25:12:09
Evan
[Evan] It's kind of funny, Doug – you answered exactly like I would have. I was going to say, one of the biggest things with the teams I've been very fortunate to have is that I've often worked with companies that are doing exciting projects that are going to have a big impact. And I've only ever worked for small to medium-sized organisations – I typically never had the budgets of the big guys –01:25:12:09 - 01:25:34:03
Evan
but still I could convince people to come and join me, because I knew what was on offer here was something really exciting. And so, definitely, that vision. And more than any other entity on the planet – for everyone listening – government, you have the biggest impact. You have the biggest impact on all of society. What you do affects people like no others.01:25:34:03 - 01:25:54:09
Evan
Governments are the biggest buyer, I think, of everything in every country, pretty much – from contributions to GDP to purchasing, from the USA to Australia. If you were to think of the government as an organisation, it would be the biggest. And so I think government really has an opportunity to get excited and get behind its projects. And I see, so many times, we keep coming back to the business problem, right?01:25:54:10 - 01:26:12:04
Evan
Not only for whether we should do the project, but for how to get the people, and all these other aspects. But I think there are a lot of other things. And if you look at ourselves as we mature in age and get older, one of the things that I think is quite nice – and it'd be interesting to hear your thoughts on this, Doug – is probably flexibility in work too.01:26:12:05 - 01:26:36:13
Evan
Like, as you get older, you've got an enormous amount of expertise, and as we move towards retirement and move on, just sitting there and doing nothing probably isn't an option that excites many people. So they've got this expertise that they can lean back on. And I'm sure those people don't want to grind away 9 to 5, but you could probably get someone in for a couple of days a week to assist, guide, mentor and teach the teams,01:26:36:13 - 01:26:53:00
Unknown
and that's something totally available. And I know both Doug and I, in our careers, have been involved in many startups. At the moment I mentor and am a part of three startups that I can think of, and I don't get a cent for any of that. None of them pays me for any of that mentoring, guidance and the discussions that we have.01:26:53:04 - 01:27:11:15
Evan
But I do it because I'm passionate and I think it's exciting, and I think the problems they're working on are worthwhile. So if you can get behind the problems – and I know a lot of people go, you know, this is just a boring problem doing X – but Elon Musk once said, really well and really poignantly, that even the cleaners at Tesla play a critical role in Tesla's success.01:27:11:15 - 01:27:27:11
Evan
We're all part of this vision of achieving something amazing. So I think if a lot of managers step back and go, wait a second, what are we really doing here, and sell that vision – though perhaps that's more leadership than management – then I think they could get people excited to join a public service where there are probably not the same insane hours.01:27:27:14 - 01:27:42:13
Evan
Like, I've got a family. I have three children. I do work long hours and do a lot, but it's not something that everyone with three children wants to do. And coming to the government, even if it is a slightly lower wage ... I was talking to someone the other day here who was asking me which city, do you think, is the best to live in?01:27:42:13 - 01:28:02:05
Unknown
And I said, it really depends on where you are in life. If you're single or a couple, I'd probably recommend you go to Sydney or Melbourne. But there are equally capable, awesome data scientists with families who have decided, we've got to move to Brisbane or Canberra, because the rat race in Sydney or Melbourne has just gotten too hard. You're just driving kids around, and the traffic's just becoming intolerable.01:28:02:06 - 01:28:23:11
Unknown
[Kate] Housing prices. [Evan] Housing prices are too high to buy a big house for kids. And for those families that do want to spend time together – I love my kids and I love spending time with them – that type of government work offers an incredibly well-balanced and exciting option where, sure, there's going to be crunch time, and you'll have to work extra hours to deliver on the projects, but most of the time you're going to be able to spend time with your family and enjoy that balance.01:28:23:13 - 01:28:44:00
Kate
[Kate] And, you know, to be clear, just even the commuting time you would save in Canberra – you can very often get a commute which is 10 minutes, you know, 20 minutes, which is just unheard of in those big cities. And I think that's a really important point, noting that, of course, not all the roles in the public service are based in Canberra. But really thinking about these broader intrinsic motivations and those needs, and how you can meet them, is great advice.01:28:44:01 - 01:29:07:13
Doug
[Doug] I know we're short on time, but I would really appreciate the opportunity to give one more example. I have not spent any time in government. However, I have one really good example that's very meaningful. When I was an undergraduate student at Loyola in Maryland, one of my math professors worked for our Social Security Administration – that's what pays, you know, money to folks after they've retired.01:29:08:01 - 01:29:31:13
Doug
It also helps take care of people that have been injured on the job – so, worker's compensation and disability. And his career work there, for 20 years, was building models to predict whether someone was going to be able to return to work or not, and when, and at what level of efficacy. And he literally spent years and wrote papers, you know, talking about this work.01:29:31:15 - 01:29:56:13
Doug
And it was the most significant government example that I can give. For those that are technical, it's basically called a binary classification model: is somebody going to return to work or not? There are two sides to that story. There's the government side that has to plan capital reserves – how much money are we going to need to set aside annually for these programs to make sure that people are taken care of?01:29:57:00 - 01:30:27:03
Doug
But it's also on the individual side, to see, can they return to work, and if so, when. And that, to me, was a great example of the power of – this was 30 years ago. It was statistics. It wasn't AI or machine learning; it was classical statistics. But it's the same principle: he spent years and years refining those models and looking at demographics, psychographics – very rich, rich data.01:30:27:03 - 01:30:50:12
Doug
And they kept adding more and more data to fine-tune the model, to get down to: we have 2 people who, by their data, look exactly the same, with exactly the same injury, working in exactly the same job – but one returns to work with the same injury sooner than the other – and exploring, why is that? And then figuring out, what can we do to help?01:30:50:14 - 01:31:16:03
Unknown
What other services or needs does this other individual have that separate them from their peers? You can statistically look at two people who are exactly the same, except one returns sooner than the other. And that, to me, is my one single example that is hugely powerful: making sure that the government is economical in its planning and its reserves, but also helping the individual. [Kate] I think that is so powerful.01:31:16:03 - 01:31:36:04
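A minimal sketch of that kind of binary classification model, just to make its shape concrete – the data here is a synthetic stand-in, and the real models drew on far richer demographic and psychographic variables.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for case records: features -> returned to work (1) or not (0).
X, y = make_classification(n_samples=5000, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")

# The probabilities, not the hard labels, are what a planner would act on:
# they feed both the reserve calculation and the individual case review.
print(model.predict_proba(X_te[:3]))
```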
Kate
And I think, you know, it is such a great note to end on, because it really does underscore that some of the most complex, important, significant problems – you know, anywhere in the country, anywhere in the economy – reside in government. And it's why there is such an important opportunity to use AI systems to make those better decisions, to genuinely improve service delivery, to genuinely improve welfare.01:31:36:04 - 01:32:01:02
Kate
But, you know, it's also why it is so vital to get it right, because when you don't use these systems carefully, you can have, you know, very negative consequences for people's lives. I'd really like to thank our two panellists, Doug Gray and Dr Evan Shellshear, for their time today and their really fascinating insights. I would really encourage everyone, if you want to learn more about this, to go and download the book 'Why Data Science Projects Fail'.01:32:01:02 - 01:32:31:04
Kate
There are, you know, a lot more rich learnings in there that build on what we've heard today. And, finally, a huge thank you to the DTA for organising this series and for giving access to such wonderful expert advice. Thank you.