I AM
Nicole Perry
Writing about mental health from a feminist counselling perspective
As technology continues to evolve, finding ways to integrate it into our therapeutic practices can be both exciting and beneficial. With the launch of GPT-4 earlier this year, we have even more opportunities to enhance therapy with AI. At the same time, it's crucial to be mindful of how we're integrating this technology, just as we would with any other new tool or practice. I hope this article can be a useful guide for clinicians and clients who might be considering, or already using, AI tools in their work together.

With any free or new technology, I have learned to ask the question "what's the cost?" Often with tech, it's our data. In the case of ChatGPT and other AI, I was dismayed to learn that the answer to "what's the cost?" is also the environment. For those who haven't heard, training and using AI consumes a shocking amount of energy, leading to increased electricity usage and carbon emissions: it's estimated that "the carbon footprint of training a single big language model is equal to around 300,000 kg of carbon dioxide emissions. This is of the order of 125 round-trip flights between New York and Beijing". It also uses an excessive amount of water: while "a single Google search requires half a millilitre of water in energy, ChatGPT consumes 500 millilitres of water for every five to 50 prompts". There are even more environmental and human costs, and companies are not being fully transparent about them.

I don't know what the answer is here, but I think it's important to start with awareness of the costs and benefits of using certain technologies, especially for the population groups that are likely to suffer most from climate change. I want to be clear that I'm not here to say whether an individual person should use AI or not, especially because I have seen some incredibly positive and helpful uses despite being wary about the costs.
Instead, I thought it would be helpful to add to what I hope will be an ongoing conversation about the shifts we're making as a society, and whether they are truly in line with the future we're trying to co-create. What is the most ethical answer? The thing is, personally, I'm still torn. But in my work as a therapist, I'm familiar with having two important values come into conflict and needing to navigate that tension.

What's good for the environment may conflict with what's helpful for someone's mental health in many ways. For example, I have often encouraged clients to get their groceries delivered, order takeout, or do whatever is necessary to make sure that food actually goes into their bodies, even when it's not the most financially or environmentally sustainable option. I have also encouraged people to take a cab or a separate car instead of carpooling, when doing so would mean they could actually attend a social function without worrying about their safety or their ability to leave when their nervous system said it was time. And I have supported people in using AI to maintain their ability to keep working in the job they're in, or to help them attain new work that doesn't burn them out. These are small examples of how, every day, we make choices that require compromising on our values and deciding, out of all the things that are important to us, what's most important. This has to be an evolving and conscientious process.

I have often seen people get stuck when we create rigid rules around what is considered "ethical" and what is not, and then place those rules onto other individuals who are struggling. What I've also come to recognize is that putting the burden of the environment onto the shoulders of individuals is unfair when it is primarily large corporations that are driving the current climate crisis, and it also obscures the real problem of our capitalist system.
This burden of ethical consumerism also disproportionately impacts the people who need those easier, cheaper options (e.g., disabled folks, people living below the poverty line, people with limited time and energy).

Important considerations for using AI in therapy

All that preamble aside, incorporating AI into our therapeutic practices requires thoughtful consideration. Here are some key questions and points to keep in mind:
Additional ethical considerations
Possible positive uses of AI in therapy

Here are some positive ways in which both clinicians and clients can potentially take advantage of AI:

Positive uses for clinicians
Positive Uses for Clients
Final thoughts

As we've seen, AI isn't an evil, but it's also not an ultimate solution. It's a human-made tool with potential benefits as well as concrete negative impacts we're only now beginning to uncover. I would highly recommend the CPA's briefing paper on AI and psychology as further reading for psychologists, as well as this article on the negative impacts of using AI in science research, to keep you thinking about this topic. In addition, this video by theoretical physicist Angela Collier is a good, entertaining, and easily digestible primer on what AI is and what it can't do.
How about you? I would love to hear if you're using AI tools in your practice, what you're using them for, and how you're talking with clients about it.
Author

Nicole Perry is a Registered Psychologist and writer with a private practice in Edmonton. Her approach is collaborative and feminist at its heart. She specializes in healing trauma, building shame resilience, and setting boundaries.

About the Blog
This space will provide information, stories, and answers to big questions about some of my favorite topics - boundaries, burnout, trauma, self-compassion, and shame resilience - all from a feminist counselling perspective. It's also a space where I'm exploring and refining new ideas.