In this Advance your Advocacy Practice session led by Infoxchange, we explore how AI can be used responsibly in disability advocacy. This webinar highlights the opportunities AI offers to improve accessibility and impact, shows how it is currently being used in other community sector settings, and unpacks key ethical issues such as privacy, bias, and informed consent.
In this Advance Your Advocacy Practice Session you will learn:
- An overview of AI and why it is important for sectors like ours to understand AI and be ready for its impact on our work.
- How is AI currently being used in community sector settings (case studies)?
- What are the benefits/risks of AI in disability advocacy?
- What ethical considerations do disability advocacy organisations need to address when using AI?
For those looking to practise applying their ethical AI skills.
Course: Advance your Advocacy Practice Session – Ethical Use of AI in Disability Advocacy
To access this course, please log in or create an account.
Melissa: Welcome to this event, the Advance your Advocacy Practice session.
It’s wonderful to have you here with us today. Before we kick off today’s session, I’d like to acknowledge that we’re meeting on the land of the Wurundjeri people of the Kulin Nation and pay my respects to their Elders past and present.
I would also like to pay my respects to all other Aboriginal and Torres Strait Islander people on this call today.
We encourage your active engagement with this session, so please put your questions in the Q&A box, and we will facilitate a Q&A session at the end of the presentation. If you’re having any technical challenges, or any other issues, you’re welcome to put that in the chat box, and the team will be monitoring both behind the scenes.

I personally have been really looking forward to this session, where we’re going to dive into the great unknown and explore all things AI and disability advocacy. We know that AI offers opportunities to improve accessibility and streamline our work, but it also creates some ethical issues for us to grapple with.
Sophie will step us through the risks and benefits of using AI in disability advocacy organisations, as well as provide some case studies for us to explore together. We’re pleased to welcome Sophie Souchon, the Digital Transformation Manager at Infoxchange, so please extend a warm welcome to Sophie.
Sophie:
Thank you very much and thank you for inviting me to speak today.
AI can feel a little bit like the unknown, but hopefully by the end of today we’ll have a better understanding of what it is, how we can use it in our work, and how we can use it ethically and responsibly.
I would also like to begin by acknowledging the traditional owners of the land from which I’m joining today. I’m joining from the beautiful Wurundjeri land located next to the Yarra River, and I’d like to pay my respects to Elders past, present and emerging. In our work, we recognise the importance of Country, not just as a place but as something that helps maintain community, family, kin, law, and language. You’re most welcome to share where you’re joining from in the chat.
So this is our agenda for today. We’re going to go through an overview of AI, what it is and why it’s important to us and the work that we do, and also look at how AI is being used in community sector settings.
So I’m going to look at some different use cases, but also some different applications out there that can work really well in the advocacy support space.
And what are the benefits and risks of using it in this way? And what are the ethical considerations? We’ve also got a nice template that each of you will be able to walk away with at the end of the session, to inform your day-to-day usage of AI.
Now, I’m going to start with a poll, and I’ll ask Jack to kindly launch it. That poll is just to help myself and others in the room understand what everyone’s understanding of AI is, how confident you feel about applying those ethical principles, and what you’d like to learn. So if you scroll down the little pop-up... I can see responses are coming through nicely. You should be able to see that second question there if you keep scrolling, that’s it, and then the third one at the bottom. I’ll let you take a moment to complete that. Fantastic. Right.
We’re getting some interesting results there. Fantastic, so we might end the poll. I’m aware that some may have jumped the gun a little and gone ahead and pressed submit, which means that you were excited for us to talk about AI, which makes me happy; that’s the main bit. It’s really interesting to see the broad range of understanding of AI in your roles. I completely understand those that put one; I have sat in that position and gone, gosh, how am I going to bring this into my work? What does this mean? And then there are those sitting right at the high end going, yeah, I understand how to use it. So hopefully, for those in the room at the higher end, you may walk away with a new tool to use, or reconfirm your understanding, or you can also share your understanding in the chat. And for those at the beginning, hopefully we can help draw some new insights, or take a little step forward in your AI journey. Confidence is really varied as well; it’s pretty split across not confident, slightly confident, and confident. And really recognising those that are not confident, or who really don’t like where AI may be taking us, all I will say is that
I completely agree, and we need to think about how we use AI responsibly. So that’s great. Thank you very much; we can close the poll.
So then, starting from a really big perspective: what is AI? We’ve got a definition there, and this may be new for some of you, while others may have already seen a webinar or a conversation about what AI is.
Artificial intelligence is technology that can perceive, learn, reason and assist with decisions and tasks. Essentially, it’s a computer system that can perform tasks that typically require human intelligence. When we talk about generative AI, it’s how it generates, or creates, new content based on its prior knowledge and the information we give it. But as is the case with all technology, it has its strengths and it has its weaknesses, and this is where our ethical concerns really come in: we need to both embrace generative AI and think about the responsibilities that come with it.

I like breaking down what AI is like this. It’s a digital brain that can learn from experience, like how you learnt to ride a bike. It can adapt to new information; for example, when crossing the road or navigating traffic, we adjust our route. It can process and analyse data, like how we use our past experiences to inform our future decisions. And it can recognise patterns, like the different ways we recognise that someone’s having a bad day.

And this is a little diagram just to describe how it works. There’s an input that comes into AI; there’s the data processing, where it processes that information; and then there’s an output, or a generation of something. Then there’s that bar across the bottom: assist, adjust, and learn. That’s where AI is really interesting: rather than just a horizontal workflow, we actually have a feedback loop back.
That feedback loop brings the learning back into the computer system, and AI can learn in multiple different ways. I won’t spend too much time here, but it can learn through supervised learning, where the computer learns from labelled pieces of information; unsupervised learning, where the data isn’t labelled but might be clustered; and something you might have heard of before, reinforcement learning, where the machine learns to follow instructions and is given rewards and penalties depending on whether it’s correct. So there are those different types of learning that AI uses across that bottom bar, but at its core, this is how AI works.

Now, what we’ve seen as we’ve started to build AI is that different types of AI have come to light. The first one, and probably the most accessible to you, is generative, or what we call conversational, AI. These are generally free or paid, browser-based generative AI tools that will produce content: your ChatGPTs, your Gemini, your Copilot. They sit in this generative AI space, and we’ll spend most of our time talking about generative AI in this conversation. But I will show you where AI usage starts to get more risky and more complex, and that’s where we start to see AI integrating into systems and applications. You might see that in different case management systems you use, for example, or a platform you use quite regularly might say, oh look, we’ve got a flashy AI addition. You can also use AI for your data and analytics, and this is where you might have heard the term machine learning; but as you’re noticing, we’re getting down into the more risky and complex territory. And then finally, there’s building custom AI solutions.
That’s working with developers to build AI. Today we’ll spend most of our time in that generative AI space, but if you are exploring some of these more complex applications, I’m more than happy to have a conversation and hear what you’re doing, and also talk about ethical usage in that space as well.
Now, when we come to generative or conversational AI, there are two types. The first is an external AI tool, which only calls on external data. An example of that is ChatGPT, which calls on external data from the internet and their databases of information. It is not taking any of your personal information; the only information it takes is from the internet and what you provide to it.
The second type is internal AI tools. This is where they have access to both your internal information and external information. We see this with Gemini and Copilot: if you’ve got a paid licence, it will integrate into your information systems and access both, and that’s where security is really important. So just flagging that where a tool gets its data from can change how it works.

Great. So then, what are the benefits and what are the risks? There is a long list in this space, and I’ve listed some here, weighed on either side of the table, just to be able to see them on the same page. There are equally really exciting opportunities for us, but we also need to think about the risks that are there.

Thinking of the benefits: we have cost efficiency, where we save time by reducing repetitive tasks and calling on AI to help with some of those; where we might spend time thinking about how to write that email, we can call on AI to help us with it. We’ve also got some really interesting accessibility tools coming out, and we’ll talk more about that in a minute, with translation and speech-to-text opportunities where technology has had limited capability in the past. Improved outreach: we’ve now got new opportunities where organisations have chatbots, so rather than talking to a person, we can also talk to a chatbot to get key information. Now, there are pros and cons with chatbots, but they give an accessible way to reach information that you may not have been able, or wanted, to reach in another way. Impact measurement, in that AI can help us with our reporting and understanding our data. And scalability, so we can process larger amounts of information and support our outreach that way. Now, equally, you’ve got your different risks, and biases as well.
We’ll go into this further down, but AI models will reinforce what they have learned. You might have heard this before: if a model has learned biased or factually incorrect information, it will just repeat that in its responses until it is told that it is incorrect.
We also have data and privacy concerns: thinking about what information the platform has, and also what we contribute to the platform. If we’re using free tools, we may be giving AI information, even personal information, that we don’t want to share, so we need to think about what information it has access to and what information we’re giving it. There’s also its dependency on available data: whatever it has access to, we need to make sure is reliable.
And this is where we start to get into one of AI’s quite clear limitations: it lacks empathy and judgment, so it does not always understand the nuances of the relationships which we have as humans. There’s also the dependency on technology: are we reducing that human touch, which is pivotal to the work that we do? And there are the cybersecurity risks that come with the access to data.
Now, there are quite a few on there, and some of these may ring true to you more than others. One that was quite clearly missing when I looked at this was the environmental impact that AI has, which we need to factor into our usage. I’m sure there are others as well, and you’re most welcome to pop them in the chat so we can continue to talk about the benefits and the risks.
I did slightly intentionally put more risks than benefits on the screen. The question you might be asking is: why? Why are we talking about this, and why is it important to me and the work that we do?
Now, I want to take a second and ask you to do a little bit of homework, or a little bit of pondering, in the next week: think about what are some of the most time-consuming tasks that you do in a week.
So, don’t worry about AI for a second; I just want you to sit down and have a think. What are some of those time-consuming tasks that you do?
Are there some repetitive ones? I definitely spend a bit of time sitting there looking at spreadsheets, or looking at emails, going, I have to respond to all of them. Or am I trying to write a grant? Am I writing policies? Am I trying to get a social media piece out? I’m not the best wordsmith, so coming up with engaging words is always a bit of a struggle for me. Make that list in the next week, then sit down and look at it, and ask: are there opportunities where I could bring AI in to help me? Maybe even come back and watch the session again. Could it ease the writing of that email, or give me ideas on how I could respond differently in a way that might be more engaging or responsive? And then also consider what that could look like.
Now, do keep in mind that I’ve just asked you to make a long list (well, I hope it’s not too long) of potential challenges you may experience in your week, or those tasks that you put off and avoid. But not every task will necessarily have an AI solution.
There are still some very important human tasks that AI can’t solve.
But what I’m trying to ask you to think about here is: is there an opportunity for AI to help? Or even where technology more broadly can come into play in easing your day-to-day work, so you can have more time to spend doing the work you enjoy, or working with your clients.

Why I’ve included this bit is the next step of going: okay, I think I’ve found a task that AI can help with, but I know there are lots of tools out there. Which do I pick, and which do I use best? I’m not necessarily going to be able to answer that question for you today, but what I want to give you are the steps to think about making that decision. Now, there’s a big caveat with the first step: start with free tools, but also start with information that is public, and I’ll explain why we’re saying that. Start with a free tool, and take a task out of that list that doesn’t involve personal information. Maybe the task is responding to an email chain, but that’s probably not the most applicable example, so let’s say I’m writing a proposal and I need help finding the background information. I’m not talking about the whole proposal; I just need a paragraph on why I’m putting this proposal in, and what broader data I can bring into it.
And that’s a really good question to start with, because for that broader information I would have otherwise gone to the internet. So take it into a free AI tool and ask: what information can you help me with for this proposal? That can start to generate new ideas for you.
Now, why we’ve said start with free tools is that, unless everyone has a lovely budget for purchasing AI tools, and I completely understand that they cost money you need to find, we need to prove the value of AI first. We also need to try the different tools, because they do engage differently with us. So try the free tools, try multiple of them with that simple question, and see which one you like: see which one you respond to, see which one gives the answer you’re looking for.

Then, with that information, consider your needs: consider what you want to continue using the tool for. I’ve taken my proposal example of finding background information and put it into Copilot, Gemini, and ChatGPT. So I’ve tried them all, and my preference? I pick option B, though I’m not telling you which one. Then I’m going to use it a couple more times and see: does it consider my needs, is it working well for me? And then, with that, I’m going to think about my risks, and we’re going to use a tool in a moment that will help you consider your risks and how you can continue using AI in your work.

Right, so that’s the step-by-step process as you’re working through your task next week. You’re most welcome to reach out; I’m really curious how you go, and I hope this helps you identify where AI can come in and make those steps towards bringing AI in securely. Now, I have mentioned a couple of different tools, and for those earlier in their AI journey, they may be new words or new apps, so I’m going to show you a couple. I’m also going to show you a couple that are further down the AI journey but working in the disability advocacy space, which I think might be interesting to see. And the other piece is that you may know others too, so please also put them in the chat as we’re showing these AI solutions.
Now, you might also go, I’ve tried this and it was horrible. That’s actually helpful for us too, so you’re most welcome to put that in the chat as well.

The first one is probably what kickstarted the generative AI conversation in many ways, and that’s ChatGPT. This is in your browser, and it calls on the internet and their database, so it’s an external AI tool that you can ask different questions. You can see across the left side here I’ve kept my prior conversations, so I can go back to a conversation and ask more questions. You can type in here, you can also dictate, or you can use voice mode as well, so there’s some really great variety there.

Similarly, Copilot can be in the browser, and you can also have it on your computer if you have a Microsoft device; I’m just going to drag that onto the screen, and you can see mine is there as well. Now, for those working in an organisation that’s considering paid licences, you will have an option up here that you can flick between. I have a paid licence in this case. If you have a free licence, it will just show the web option, but with my paid licence I can actually go to work, and you can see it’s pulling a range of different work items that I can call on. So it’s an internal-facing tool.

Now, the third general one that I want to show is NotebookLM. Let’s go back, because I was one step ahead.
This is within the Google Workspace environment, and I mentioned Gemini: Gemini is their version of Copilot and ChatGPT, with a similar interface.
I wanted to show NotebookLM because it’s really helpful when you have a lot of resources. You might have a policy or different documents; you can see here we’ve got a YouTube video and a slide pack. I might have a range of different training content and I want to question it, I want to talk to someone about that information. So NotebookLM creates an exclusive source where you can research on just that information. I can start typing different questions, and I can also create things: this is where you can create podcasts, and they’ve also got flashcards and other things you can create from there. So I really see NotebookLM as a great research tool. We use it for our training at Infoxchange: we have all of our training listed, and then we’ll ask questions and formulate from there. So those are, in many ways, the general platforms.

Now I’m going to show some other ones. Goblin Tools is designed to increase accessibility, calling on generative AI to break things down in different ways. They’ve got a range of different tools here across the top. The to-do tool creates to-do lists, and what I quite liked about this is that I typed Apple in when I was testing it.
I can actually get it to break down the item. In this case it’s helped me break down the information it provided, so I could give it a list here and it will create tasks below it. It can also help you rephrase different pieces of information, so if you’re working with someone who struggles to articulate their thoughts, this can be really helpful to change what people are saying. You can see there’s a range of different ways you can change the text to support someone, which I find quite funny. This one is to help understand time. This one’s a bit of a research one: it can give you a bit of a crash course on a piece of information, consider the pros and cons, and then estimate how long something will take, so it’s really helpful for someone navigating a task.

The next two are really helpful in that they can help you visualise things. This one’s a flowchart builder: you type your prompt in here and it creates the steps for you, so it’s very helpful if you’ve got a long list of instructions you want to break down visually. The next one is a very similar tool that gives you a few more icons. It’s called Napkin AI, and you can give it that similar list and it will create visualisations. Very helpful if visuals help, though I completely understand that visuals can sometimes take a lot of time to create, so there are different ways it can help you. I use Napkin AI quite a bit because they’ve got this sidebar where I can actually make changes, so if a chart is not making sense to me, I can very quickly change it and it shows me something different.
Moving into sign language, there is a range of different Auslan tools out there, and possibly those in the room might be better at this than I am, but they let you type text in and they create the video for you. Why I’ve included Kara AI is that they’ve worked with Red Cross to bring this into Auslan, so that’s where it’s quite helpful, but there are others as well. Another one which I quite like is Be My AI. Sorry, my brain’s gone; this is a tool which lets you take a photo or a video and then get an audio description played back.
So, very helpful there. Just going to do that for a second, sorry, because I can’t see the next one. And then this space, I think, is going to be the most exciting space in the coming months.
And I’m really keen to watch this space: it’s where AI is moving into speech, and I really like what Voiceitt is doing in developing speech recognition for non-standard speech, being able to work much more in that space. The last one I included because it’s quite funny, and it brings us into the ethics conversation. This one’s been very much at the front of my AI conversations at the moment, around actual care and companionship, and it’s been across the news as well.
But this one’s quite an interesting one in that it’s going into robotics. I’m not saying that any of us are going to be buying this tomorrow or anything like that, but it’s AI-powered companionship, and it’s been developed through decades of research and heartfelt care. It’s in aged care settings, it’s in disability settings, and it’s been involved with the Australian Government and Dementia Australia too, so a really interesting one to see. We will send these links through following the session, but I wanted to give a bit of a feel of it all, because I like talking about AI and the exciting space, and then let’s talk about
what are the ethical considerations? Now, it’s quite likely that each of those tools might have sparked an “I’m concerned about that” or an “Oh, I don’t know if I’d want to use that”, and those are very fair and valid concerns that I completely understand people have. When we talk about the ethical concerns, we talk about them in four key areas.

The first is data privacy: how the AI systems handle data. These systems rely on vast amounts of information, and if we contribute personal information, it gets put into the pool of AI data and can be called on by others. We need to be very careful and raise questions about the data that is stored within AI and how it’s being used, and it’s critical to ensure that AI respects individual privacy and adheres to data protection regulations, minimising the risk of misuse.
The second area, which we talked a little bit about earlier, is fairness and bias. AI learns from data, and if the data reflects existing societal biases, AI can unintentionally perpetuate or amplify them. This can result in unfair outcomes, especially in sensitive areas like funding assistance, allocation of resources, or criminal justice. Addressing these biases starts with us thinking about diverse and representative data sets, fairness metrics, and ensuring transparency in our decision-making processes.

That brings us to transparency and explainability. AI can seem a little bit like a black box, making decisions without clear explanation of how they were made, and this lack of transparency can erode trust and accountability. We need to strive for explainable AI where the reasoning is clear, and it’s not just AI and the tech companies that can work on transparency but ourselves too: when we use AI, we should understand where the information is coming from, and question the model to do just that.

And then there’s human control and oversight. AI can automate and assist in decision-making, but it is essential that we maintain human control, because AI can’t replace human judgment entirely; we must intervene, review the decisions, and ensure the system is used ethically and responsibly. One critical reason for maintaining that human oversight is the potential for AI to hallucinate. You might have heard this terminology before: it’s where AI systems say something that is factually incorrect, misleading, or doesn’t make sense. These hallucinations can arise, and continue to arise, even in the most advanced systems, and if they go unchecked they can cause some serious problems.
So we need to monitor AI outputs, ensure they’re accurate, and make our decisions on reliable and ethical information. Hearing those concerns, there is a role that we can play in our AI usage, and this is where the CARE about AI framework comes into play. As an individual, you can use these four steps when you want to use AI in a given setting: think about the consequences, the accountability, the responsibility, and the explainability. This will help you make decisions and support your usage. So now I’m just going to go one level deeper.
So, consequences is looking at the positive impacts AI can help with and, equally, the negative impacts. We actively think: okay, are there going to be positive outcomes for me? But also, am I putting in data that might be private or risky information we don’t want to share? Am I going to get a result that might be biased or hallucinated? What are my consequences? And we need to think not just about the immediate consequences, but also the long-term impacts.

Then, accountability: who takes ownership if something goes wrong?
AI can become a little bit murky in this way. Are we ourselves held responsible, or is it the developer, or the organisation? That’s where safeguards come in, to ensure clear accountabilities. Your organisation might have spoken about an AI policy before, and if they haven’t yet, they need to come talk to us.
But this is where your AI policy can help guide who is accountable and what we need to take into account. The third piece is responsibility. This involves ensuring our AI systems respect human rights, such as privacy and fairness, and operate with transparency. On an organisational level, it’s: what’s our duty of care? But also, as an individual, what do I need to consider in my use?

And then finally, explainability: making sure that the AI’s decisions are understandable and justifiable. It might mean asking back to the AI: why have you made that decision? How have you made that decision? What data have you used to draw that outcome? These questions can be really helpful in deciding whether the response is easily explainable, and that’s also where accountability and responsibility come into play.
Now, I’m a little bit conscious of time, and I can see there are lots of questions in the chat.
There is an example following this slide, but I’d also recommend taking the chance to sit with this framework as you start to do the homework we talked about earlier.
Sit and think about it as you’re coming up with your examples. You might be thinking, I could use AI in this space.
Let me think about my consequences, let me think about my accountability, let me think about my responsibility and the explainability.
Fantastic. So I might wrap up there. I just wanted to check I didn’t have... no, we’ll do that in the slides. So I might stop sharing, Melissa, and then we’ll move to the questions.
Melissa Hale:
Thank you so much, Sophie. It’s been really great to hear about all the options and innovations, and to be given a clear pathway to decision-making for our organisations.

Before we get to the questions, I just wanted to make a really important point for this group, raised by Victoria in the chat. The sign language AI that was shown is not Australian Sign Language, it’s American Sign Language, and it’s a whole different language.
Sophie:
Yes.
Melissa:
People who are using Auslan in the group will not understand American Sign Language, so I just wanted to make that really clear to everyone. This is exactly what Sophie was saying: AI is not really understanding the nuances of information, and it still has a really long way to go. So my first thing to say is, don’t use it.
Sophie:
Don’t use it, yeah, please don’t use it. The reason I was showing it was to show the potential. My understanding is that Red Cross have been working with them to bring Auslan into the technology, and we’ve been talking about it in a different setting because it’s a really exciting collaboration that we’re seeing. But no, it is not live, and with any AI usage, please read and make sure that it is applicable to your work. So, sorry for that confusion; that wasn’t my intention.
Melissa:
No, no, that’s fine. It’s a great example, and it’s absolutely a live conversation happening in the Deaf sector as well about the impact of AI on online interpreting, and it shows some of the misinformation that’s out there, because if you don’t know, you don’t know. So, thank you for that. We’ve got some questions flying in, so I’ll start asking them. The first question, looking at the chat: disability advocates would really like to use AI for things like case notes, reading allied health reports, reading case summaries, as well as things like meeting minutes and meeting summaries. What would be the most secure way of doing that, one that utilises the tech available to us but also protects our clients’ data?
Sophie:
Such a great question, and I’m not necessarily going to give the answer people are anticipating, of saying yes, go ahead and run. I’m actually going to take the client perspective here: please be careful putting in any client-based information. In fact, we advise: please do not put any client-based information in, especially in free tools. Please do not do that. From the client-note perspective, and for any personal information, what I would say is advocate within your own organisation to look at paid tools that can support that. We’re seeing some really great developments in the case-management space that can ease your minute-taking and the conversations you’re having. We had a webinar yesterday in the health space about how some of the transcription tools are enabling doctors and nurses to have a more engaging conversation with their patients, and then, as a result, ease their meeting notes. So we’re seeing some really great advancements there. But the key piece I want you to take away is: please do not put personal information into these tools, because of the risk that it can flow into the broader database. Rather, talk to your organisation about what you can use, what you want to use, and what you can do together.
Melissa:
Yeah, awesome, thank you for that. So, you mentioned the different AI tools that use external data only, and some that take in internal data as well. How does, for example, Copilot access your internal data, and what do you need to do to enable that access?
Sophie:
Absolutely. So, Copilot: that opportunity will turn on when your organisation turns that function on, and it will require paid licences for the staff that have access to it. So it’s that next level up. The paid licence is part of the reason; we’re advocating with Microsoft to change that for our sector, so that’s one piece. But the second piece is that there’s a level of security work your organisation needs to do to ensure the privacy of the information it’s making accessible, and also the privacy of your own SharePoint. Things like making sure that not everyone can access salary ranges, or anything like that. You’ve got to make sure your SharePoint is set up correctly, such that people only have permission to the information they should have access to. That’s why it sits under that paid licence. But once you turn that on, exactly as I showed you, you can flick across and then call on the information that’s within your SharePoint and your OneDrive.
Similarly, if you’re in the Google Workspace space, this is where NotebookLM can be really helpful, and why I wanted to show that as well: it can create a similar shared environment with your staff.
You can put all of that information in, and then call on just that information as well. So there are opportunities depending on the platform you’re in.
Melissa:
Thank you for that, and I think the next question digs a little bit deeper into that. Can you place parameters around Copilot, or those other apps you were talking about, to suit your organisation (think tone and other specifics)? And what are the benefits of something like Copilot versus ChatGPT?
Sophie:
Absolutely. To answer the first question: you definitely can. Now, it comes down to whether you want different tones. There may be an organisational tone you want across everyone, but you might have quite a different tone on your Instagram and Facebook as opposed to your LinkedIn, for example. In the Copilot world, this is where you create agents for the different pieces. You can have one agent that has just that tone and calls on just that information, and a different agent that calls on different information at a different time. If you’re starting to explore that space, we’re more than happy for you to reach out to us, and we can talk about it more in depth and make sure it’s built for you. To answer the second part of your question, the difference between ChatGPT and Copilot:
There’s two key differences. Firstly, they are trained differently.
So they respond to information differently. This is where it’s really valuable, as I was saying earlier, to try the free tools and see which responses you prefer.
One might respond differently to the other, and you lean towards that.
So that’s the first difference. The second is more around how it sits in your environment, in your organisational technology. Copilot can live within your Microsoft environment, so if your organisation works within the Microsoft environment, that is an advantage it has. Whereas ChatGPT just lives in the browser; that’s the extent of its functionality.
Melissa:
Interesting. Thank you! So, the next one is around policies. What can organisations do to make sure their internal policies reflect the ethical use of AI? And how can we ensure compliance with that?
Sophie:
Absolutely. This is where we have a template that we worked on with PwC for not-for-profits to create an ethical AI policy. So if you’ve got one, or you’re looking to create one, that template can be very helpful. At the early stages of AI adoption, the key piece of your policy is putting those ethical standards in place for your organisation: what you can use AI for, and what the organisation doesn’t want you to use AI for. And then, and this is probably where it’s most pivotal, it’s actually putting that policy into practice so that staff understand how to use it. So, spending the time to train staff.
I actually really like the approach we’re seeing from other organisations of sitting down team by team and talking about the policy and about people’s concerns. Because that’s the other thing: the policy can actually bring some comfort to people, going, okay, this is where I can use AI, and I know the organisation is actively thinking about the risks that come into play.
So bringing that policy to life is going to be the key to supporting and informing staff on when AI can be used.
Melissa:
Yeah, amazing. And absolutely, we would love to see that template. I’ve got time for about two more questions, and I’ve got two more questions here.
So, is there any established best practice about letting clients know that advocates might be using AI to help with administrative tasks? For example, do organisations need to get consent from clients to use AI? Is there any guidance on that, or is it something each organisation needs to consider on its own?
Sophie:
I want to do some research for you on that, and there may be people in the chat who have some answers too. My understanding is that the National AI Centre, who we work closely with, have given standards for us to work towards, such as the AI 6, which has been recently launched and which advises guardrails that we should be putting in place.
It’s still quite broad at this stage, but I want to go to that next level of detail and do some more research for you around what’s coming out from the state governments and so on.
Now, putting that aside: as a sector, we are also in a really exciting position to lead this conversation and set best practice as well.
So we recommend that you do. I mean, the terminology is “white label”, but it’s about explaining to the people you’re engaging with that you are using AI in this setting, and how you’ve used it. That might mean, on a social media post, saying that AI supported the creation of the text or the image; or, if you’re using it from a transcript perspective, asking before you start the session: is it okay that I use this, and that it will be used with AI?
Those types of conversations. So that, in many ways, as a sector, we set the standard on how we should be using AI.
Melissa:
Yeah, great. Amazing, thank you for that. The last question before we close this session: how important is it to consider privacy concerns and security violations, especially for smaller organisations who might not have AI-trained staff or an IT team to manage those cybersecurity and privacy risks?
Sophie:
I love this question, because if I can slip cybersecurity into every conversation, I’m doing my job well. So thank you for that question.
There are definitely steps you can put in place. Along with the policy, I’ll also send a nice article we have on the core steps we recommend all organisations prioritise on the cybersecurity front. It is an important priority that we each need to put in place, no matter our size. The key bit is sitting with staff and making sure they understand. For example, we have free phishing training for staff: making sure they understand what a phishing risk may be when an email comes through, when it’s suspicious, and what to do.
We consider that equal in importance to understanding the AI considerations, because it all comes into play in that training space. So we’ll send you through the list, and where it can be prioritised, within reason, we strongly advise doing so as you consider and progress both your cybersecurity and your AI journey.
Melissa:
Right, thank you so much. There have been a whole bunch of questions about whether people can access the slides after this, so I just want to let everybody know that the recording and the slides will be available to you after this session. Also, Sophie will send through the template policy she was talking about. I just wonder if there’s anywhere else we can go to find other AI resources and key points.
Sophie:
Absolutely. We have our AI not-for-profit learning community, which you can join to speak with other staff and volunteers who are building their AI skills. Within there, you can create your own personalised learning journey and build AI skills relevant to our sector and to your roles. I’ll send that through as well.
Melissa:
Gotcha. Beautiful, thank you. Once Sophie has sent all that through, I will pass it on to everybody on the call today. I’m so grateful to you, Sophie; this has been amazing. I’ve got so many things I want to look up, learn, and find out more about. It’s time for us all to step into the unknown, step out of our comfort zones, and embrace AI, but with caution, and with learning behind us. Thank you very much for presenting today, Sophie. It was really, really helpful.
Thank you to the Auslan interpreters, and thank you all for attending and for your great questions.
Thank you, everybody, have a great afternoon. Bye-bye.