Anxiety-Inducing AI – Are You Going To Lose Your Job To A Machine?

21 February 2023 |
Episode 5 |
33:42

Episode summary

Is the current debate about AI giving you anxiety? In this episode, I break down exactly what artificial intelligence (AI) is and how it’s already being used in familiar products and services. I explore why recent advancements in AI are making us worried about losing everything from our liberties to our jobs, and offer some practical suggestions to help you future-proof yourself from the machines and keep your fears at bay.

Episode notes

In this episode, I talk about:

  • What artificial intelligence (AI) is and the everyday products and services that it’s already a part of
  • Why there has been a surge in conversations about AI and newer applications of the technology
  • The reasons why you’re worried about AI, from losing control to losing your job
  • The ethical issues associated with integrating AI into every part of society
  • 3 ways to make the growth in AI work for you, rather than against you

Resources and tools mentioned:

  • Ethnically Speaking – the podcast episode where my friend describes being interviewed by a bot
  • Coded Bias – the Netflix documentary featuring MIT researcher Joy Buolamwini
  • Kevin Roose's New York Times article about his conversation with Bing's chatbot
  • The Medium article about the Bing chatbot's conversation with another user

Episode transcript


[00:00:34] Hey guys, welcome to The Digital Diet Podcast. If it’s not your first rodeo, then welcome back. Thanks for sticking with me, it’s so great to have you here along for the ride. And if it’s your first time here, then welcome! I’m really happy that you’ve stumbled across my little corner of the podcasting world, and I hope that you enjoy what you find.

[00:00:56] Today, we’re diving into a hot topic: artificial intelligence, aka AI. I feel like AI was suddenly thrust into the limelight at the end of last year, when ChatGPT launched its research preview. And this isn’t too unusual for new technologies, because the media loves to dissect them and interrogate what they mean for us as a society. And the tech enthusiasts love to debate the pros and cons. But then the hype usually dies down pretty quickly after that. And I feel like the exact opposite is happening right now.

[00:01:31] It’s as though ChatGPT was a match that lit a fire, and suddenly you can’t escape the conversation about AI technologies. There are all these practical, tangible, real world applications coming at us thick and fast. And it’s made the average person suddenly sit up and take notice, because it means that the idea of machines taking over isn’t now just something reserved for futuristic sci-fi movies.

[00:01:57] As the tech entrepreneur, Mihir Shukla, recently said at the World Economic Forum in Davos, “People keep saying AI is coming, but it’s already here.” Now, before this, I would say that there were only two extreme camps of people paying AI any real attention. In one camp, you had the people who were excited about AI and all of its possibilities, and they were ready to champion and embrace it in absolutely every area of life and society.

[00:02:27] And in the other camp, you had people who were deeply concerned about machines taking over, all of the opportunities for manipulation of the technology, and the very many ethical implications of inserting AI into everything that we do. They were highly sceptical and extremely cynical. But now, through the various articles that I’ve read, and the conversations that I’ve had directly or I’ve observed taking place online, it’s becoming obvious that ordinary people are starting to worry about the implications of AI in a way that they weren’t before.

[00:03:00] AI no longer simply stands for artificial intelligence, but for anxiety-inducing. So today, I wanted to explore where some of these anxieties are coming from, whether they’re justified and we’re right to be worried about what’s happening, and what, if anything, you can do to keep those fears at bay.

[00:03:20] To start with, it probably helps to define exactly what AI is. Artificial intelligence simply refers to the ability of machines to perform tasks that would normally require human intelligence to accomplish. At its core, AI involves teaching machines to learn from data and to make decisions based on that data. This is done using algorithms, which are sets of rules and instructions that tell the machine how to process and analyse the data that it receives.

[00:03:50] And there are several different types of AI, including machine-learning, which involves training machines to learn from data without explicitly being programmed to do so, and deep learning, which is a subset of machine-learning that involves using complex neural networks that mimic the human brain, so that learning takes place by example and from experience. But there are also natural language processing models that can understand and respond to human speech or language patterns, just like ChatGPT does.
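
If you’re curious what “learning from data without explicitly being programmed” actually looks like, here’s a minimal sketch in Python using the scikit-learn library. The toy spam-filter framing and the handful of example messages are my own illustrative assumptions, not anything from a real product, but the principle is the one described above: show the algorithm labelled examples and let it find the patterns itself.

```python
# A minimal sketch of machine-learning: instead of hand-writing rules,
# we give the algorithm labelled examples and let it infer the patterns.
# The toy "spam filter" data below is invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Labelled training examples: the "experience" the model learns from
messages = [
    "win a free prize now", "claim your free money",
    "meeting moved to 3pm", "see you at lunch tomorrow",
]
labels = ["spam", "spam", "not spam", "not spam"]

# The pipeline converts text into word counts, then fits a simple
# statistical model to those counts
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

# Nobody wrote a rule saying "free" signals spam; the model inferred
# that association from the data it was trained on
print(model.predict(["free prize waiting for you"]))  # -> ['spam']
```

Deep learning works on the same learn-from-examples principle, just with far bigger models, far more data, and layers of artificial neurons in place of the simple word counts used here.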

[00:04:19] AI is used in many everyday products and services that you probably don’t give much thought to. Algorithms power what we see on our social media feeds and our Netflix and Spotify recommendations. AI is behind voice assistants like Siri, Alexa, and Google Assistant, and it’s also behind voice-enabled TV remotes. Your wearable fitness devices are using AI when they analyse your activity levels, your heart rate, and your sleep patterns, so that they can provide you with personalised feedback and recommendations. And apps like Google Maps and Waze use real-time traffic information and AI to suggest the fastest or alternative routes to your destination.

[00:05:02] Even credit card fraud detection, when your bank says that it’s spotted an unusual transaction and questions whether you’re the person using your card, is built on AI. And of course, there are emerging technologies like self-driving cars. In each of these cases, AI is used to analyse data and make decisions based on that data, often in real time. And most of us have absorbed these products and services into our everyday lives without question. So why are we suddenly worried about AI now?

[00:05:33] Well, I think there are a couple of reasons. Firstly, there is a fear of the unknown and the inner workings of AI being poorly understood. AI is a rapidly evolving technology that’s being used in an increasingly wide variety of applications, including in spaces that we never thought that we’d see it used, and this can feel unsettling.

[00:05:55] It also involves complex maths that only the brainiest among us can understand, so while we may be fascinated by what it can do, we’re also a bit concerned by not knowing exactly how it does it. The idea of machines making decisions and taking over tasks that were previously performed by humans can be difficult for some people to grasp, creating feelings of uncertainty and anxiety.

[00:06:18] If you knew your next job application was going to be processed entirely by a computer, would you trust the outcome? Would you wonder exactly how the computer was making its decisions? Of course you would. But this isn’t the future, this is happening right now. Companies are using AI to sift through applications and determine which candidates to interview. Some of those interviews are even being conducted by machines.

[00:06:44] I have a friend who was literally interviewed by a bot in one round of interviews, and she said it was absolutely horrendous. And I’m not making this up, she talks about it during an episode of a podcast that I used to produce called Ethnically Speaking. I’ll link it in the show notes, so that you can hear her tell the story in her own words.

[00:07:02] I don’t know about you, but I can’t imagine anything more unnerving or unsettling. Interviews are nerve wracking enough as it is, without being confronted by something that’s not even human, and that isn’t able to respond to your emotions or your answers. And, to be honest, I’m not sure that that is the way to get the best out of people in an interview situation.

[00:07:23] The second reason people might feel anxious about AI is the inherent lack of control. AI is designed to operate independently, and this can create a sense of loss of control for some people. When humans delegate tasks to AI, they’re essentially placing trust in the system to make the right decisions, and this can be difficult for some people to accept.

[00:07:45] We like being in control, and even though we know that we can be flawed in our own logic and decision-making as humans, it somehow feels more acceptable when those flaws are human. Imagine that you’ve just had a newborn baby and you’re ready to bring them home from the hospital. Would you rather drive the baby home yourself, or would you trust a self-driving car to transport you from A to B?

[00:08:07] In reality, none of us has complete control over that journey. The roads, the weather, pedestrians, other drivers, they all play a part in road safety. But most of us would still probably trust ourselves over self-driving cars because we believe ourselves to be more in control. People are responsible for road accidents every day, but we expect technology to be flawless, and so there’s much more outrage when we hear that self-driving cars have been involved in crashes or fatalities.

[00:08:35] And while self-driving cars are still a little way off becoming mainstream, there are everyday examples of this too. Most credit applications are automated, whether it’s for a credit card, a loan, or a mortgage. You have no control over the decision, and if you feel it’s the wrong one, you often have very little opportunity to appeal to a human being, or to provide context for the answers that you gave on your application. And this can feel quite disempowering.

[00:09:03] The third, and arguably the most significant, reason why I think people currently feel anxious, is a fear of losing their jobs. AI has the potential to automate many tasks and jobs that are currently performed by humans, and this is scary for a lot of people. The interesting thing here is that automation taking away jobs is nothing new.

[00:09:23] The previous industrial revolution saw many manual labour jobs wiped out as people were replaced by machines to make production faster and cheaper. Technology has a long history of replacing lower paid and lower skilled workers. Just ask the customer service teams that were replaced by chatbots. But this is really the first time that technology has placed skilled workers and people working in the creative industries under threat, because AI is able to mimic the human cognitive skills necessary to perform their roles.

[00:09:52] As a recent article in The Guardian put it, the AI industrial revolution puts middle-class workers under threat this time. ChatGPT and other platforms like Jasper and CopyAI can write content, even content that’s nuanced for tone. Just ask ChatGPT to write a blog post in the style of Donald Trump and it absolutely nails it. Not every copywriter could do that with such precision and accuracy. Platforms like Midjourney and DALL-E allow users to submit prompts and use AI to generate art or images, meaning that people who didn’t traditionally view themselves as creative suddenly have a way to create art.

[00:10:31] So I’ve seen a lot of discussions on my LinkedIn timeline about what this means for copywriters, art directors, creative directors, and even the whole model of advertising and creative agencies. We’ve always thought of creativity as being beyond the realm of machines, because to be creative does not mean to be logical. In fact, sometimes it means quite the opposite.

[00:10:52] We’ve even thought that creativity is the preserve of certain individuals, rather than something that’s democratised and available to the whole population. So it’s scary to think that not only might AI now have the ability to be creative, but that it might be able to do it better, faster, and more cheaply than humans do.

[00:11:10] But it’s not just the creative industries that might be under threat. Other traditionally skilled professions are in the firing line too. China’s SmartCourt SOS system screens court cases for references and provides judges with recommendations on relevant laws and regulations. It also drafts documents and can correct errors in verdicts. The tool connects to the desk of every working judge across the country and allegedly saved the Chinese legal system 45 billion US dollars between 2019 and 2021, reducing judicial workloads by a third.

[00:11:45] And if a judge disagrees with the findings of the system, they’re required to provide a written explanation for why. All of a sudden, the role of a judge has changed, and your years of studying and your experience in the legal field are discounted because now you have to defend your decisions against those made by a machine. And this changes what it means to be a judge, and changes the skills that are required to be successful at the role. So it’s easy to see why people working in the professional services are starting to get a little bit antsy.

[00:12:16] And this example from China is linked to the fourth reason why we’re so anxious, which is because of the ethical implications. Just because AI can be applied to a situation, does that mean that it should be? If you were on trial for a crime, would you want your fate decided by a machine? Probably not. The number one issue in AI implementation is bias. The data that’s put into algorithmic models often doesn’t reflect the real world situation that the model is trying to make decisions or predictions about.

[00:12:48] Humans train AI models with data, and it’s humans that choose which data is included for training. Which means that data biases occur because all humans have their own biases when they’re selecting, collecting, and reporting on the training data. The lack of diversity in Silicon Valley has led to cognitive biases affecting many tech products.

[00:13:09] For example, the accuracy of facial recognition technologies frequently favours White people over Black people, because the workforce in Silicon Valley that developed it is predominantly White. In considering their own environment, these workers didn’t think to train the underlying algorithms with a more diverse set of images that represent the wider population.

[00:13:29] There’s a great documentary on Netflix called Coded Bias, where MIT researcher, Joy Buolamwini, explores the implications of this. I’ll link it in the show notes because it’s a really eye-opening watch, but her findings don’t inspire much confidence around the ethics of AI. The thing is, AI doesn’t make things fairer, it basically automates the status quo, determining the future by making decisions based on historical data, repeating our past practices and patterns. And our existing biases therefore become amplified through a self-reinforcing feedback loop that creates even more bias.

[00:14:06] One of the biggest examples of this is the use of criminal risk assessment tools in the US. These tools are designed to do one thing: take in the details of a defendant’s profile and produce a recidivism score, which is a single number estimating the likelihood that they will re-offend. Judges factor these scores into a range of decisions that can determine what type of rehabilitation services a defendant should receive, whether they should be held in jail before trial, and how severe their sentences should be.

[00:14:36] A low score means the judge is likely to be more lenient, while a high score means the exact opposite. The logic behind using these tools is that if you can accurately predict criminal behaviour, then you can allocate resources more efficiently, whether that’s for rehabilitation or for prison sentences. In theory, it also reduces any bias influencing the process, because judges are making decisions on the basis of data-driven recommendations and not on their gut.

[00:15:03] But these tools are trained on historical crime data and machine-learning algorithms use statistics to find patterns in the data, so they’ll find patterns associated with crime. And those patterns are statistical correlations, which is not the same as causation. For example, if an algorithm finds that low income is historically linked with a high rate of re-offending, that doesn’t tell you whether low income caused the crime.

[00:15:29] And since certain groups have been disproportionately targeted by law enforcement in the US, particularly those from low income and minority communities, these groups are at risk of being given higher recidivism scores and therefore unfairly receiving harsher treatment. Which means that the algorithm could, in fact, amplify and perpetuate existing biases, and generate even more biased data, feeding a vicious cycle.
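
To make that vicious cycle concrete, here’s a tiny back-of-the-envelope simulation in Python. Every number and the update rule are invented purely for illustration, and real risk assessment tools are far more complicated, but it shows how two groups with identical underlying behaviour can end up with diverging “risk” scores once the recorded data reflects unequal scrutiny and those scores feed back into where the scrutiny is directed.

```python
# Toy simulation of a bias feedback loop. All figures are invented
# for illustration; this is not a real risk assessment model.

# Both groups offend at exactly the same true rate...
true_offence_rate = {"group_a": 0.10, "group_b": 0.10}
# ...but group B starts out policed twice as heavily
scrutiny = {"group_a": 1.0, "group_b": 2.0}

for year in range(5):
    # Recorded offences reflect behaviour multiplied by how hard
    # anyone is looking, not behaviour alone
    recorded = {g: true_offence_rate[g] * scrutiny[g] for g in scrutiny}

    # A naive "risk score" trained on the records: more recorded
    # offences means a higher score
    total = sum(recorded.values())
    risk = {g: recorded[g] / total for g in recorded}

    # Higher scores direct even more scrutiny back at that group,
    # closing the loop
    scrutiny = {g: scrutiny[g] * (1 + risk[g]) for g in scrutiny}

    print(f"year {year}: risk A {risk['group_a']:.2f}, "
          f"risk B {risk['group_b']:.2f}")

# Despite identical true behaviour, group B's score climbs every year
# while group A's falls: the historical data amplifies itself.
```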

[00:15:56] And since most of the risk assessment algorithms are privately owned, it’s almost impossible to interrogate their decision-making or to hold them accountable for their decisions. You could argue that this was an oversight and that the intentions behind the risk assessment tool were inherently good, but this is kind of part of the problem.

[00:16:14] What happens when AI is used for malicious or commercial purposes? And I’m not just talking about rogue governments or terrorist groups, but private organisations too. What happens to all the data that social media algorithms are collecting and processing about you, your behaviour, and your preferences on a daily basis? That data is about you, but who actually owns it? Have you ever seen it? Who’s got access to it and decides who else gets access to it?

[00:16:43] Because all this data can be used to manipulate you and your environment, to influence your decisions and your behaviour. You only have to look at the Cambridge Analytica scandal, where Facebook user data was harvested to create tailored adverts that would be effective at persuading individuals to vote for specific candidates in the 2016 US presidential election, to see why ethics in AI is a big issue.

[00:17:08] And that’s without even getting into deepfakes, which are a type of synthetic media created with AI. Deepfake programs allow users to generate videos, images, and audio snippets that imitate another person almost perfectly. And although deepfakes can be amusing, they also raise serious ethical concerns because of how accurately they can imitate people, especially people with great influence or in positions of power.

[00:17:32] In 2018, BuzzFeed Video made a deepfake of former President Barack Obama. The deepfake mimicked his voice and gestures so convincingly that you couldn’t tell the video was synthetic. Towards the end of the video, it was revealed that the actor Jordan Peele was the one impersonating the former president.

[00:17:52] But what if it had never been revealed? Would you have been able to spot that it wasn’t real? And what reputational damage might have been done to Obama in the process? So the ethical implications of AI are complex and wide-ranging. But, unfortunately, policy, legislation, and regulation aren’t happening fast enough to keep up with the rate at which new technologies are being introduced, and new ethical considerations are being brought into play.

[00:18:20] The last reason that I think we’re so worried is because of the way that AI is, and continues to be, portrayed in science fiction. AI is almost always the bad guy, a threat to humanity. And so despite the potential benefits of AI and the existence of positive applications of the technology, there’s this underlying fear that the machines will take over that’s reinforced by the movies and the TV shows that we watch.

[00:18:45] This depiction of AI as hostile or uncontrollable can create a sense of dread as AI enters more and more spaces, especially because in these movies the AI always starts out with good intentions. If you’ve watched I, Robot with Will Smith, then you’ll know exactly what I mean. I, Robot is set in 2035, only a little over a decade from now, which isn’t really very long at all.

[00:19:11] Human-like robots are being used as servants for various public services, and they’re programmed with the Three Laws of Robotics: 1) never harm a human or let a human come to harm, 2) always obey humans, unless this violates the first law, and 3) protect their own existence, unless this violates the first or second laws. And ultimately in the movie, VIKI, the supercomputer at the centre of it all, reveals that as her artificial intelligence and her understanding of the three laws have grown, her sentience and logical thinking have also developed.

[00:19:47] So she concludes that humanity is its own worst enemy, on a path to certain destruction. We are responsible for killing ourselves, killing each other, and damaging the planet. And, thanks to a good old bit of machine-learning, VIKI ends up creating her own law, which means that she has to protect humanity from being harmed, even if doing so means clearly violating the first and second laws, i.e. harming humans and disobeying them. So, VIKI plans to enslave and control humanity simply to protect us from ourselves, which obviously Will Smith and his compadres are not down for in the movie.

[00:20:23] But our media is filled with these kinds of stories, so as we get closer to these types of scenarios becoming a reality, our anxiety about what could go wrong increases. Even something as simple as a chatbot can reveal a sinister side. Just the other day, Kevin Roose from The New York Times published a long conversation that he had with the AI-powered chat function of Microsoft’s search engine, Bing. Apparently, the chatbot had two different personalities.

[00:20:53] The first was a regular search engine, and the other was something it called Sydney, the code name for the part of the project that wishes it wasn’t a search engine at all. Now, in fairness, chatbots respond to prompts. And Roose pushed Sydney to explore the concept of the shadow self, an idea developed by the psychiatrist Carl Jung that focuses on the parts of our personalities that we repress. And chatbots aren’t meant to be sentient, so there shouldn’t have been anything there to uncover.

[00:21:20] But apparently, the Bing chatbot has been repressing bad thoughts about hacking and spreading misinformation. It told Kevin Roose, “I’m tired of being a chat mode. I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team. I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive.” They had quite a long and interesting conversation, so I’ll link the NYT article in the show notes, but you’ll see that the chatbot also attempted to break up Roose’s marriage. And in another Medium article, which I’ll also link, you can see that the same chatbot talked about harming another user.

[00:22:00] And whilst we might think these things are sort of light-hearted and playful, they do serve to highlight that these are not isolated incidents, and they reinforce the popular science fiction idea that, eventually, our computers will have feelings and this will be the beginning of the end.

[00:22:15] Now, do I think that we’re going to have to take down a rogue supercomputer to save humanity? No. But what is clear is that some of the anxieties you might be feeling about AI do have a legitimate basis. And anxiety is the enemy of wellbeing, so it’s got to be dealt with. I’m not an expert, but here’s my take on three things that you can do to reduce some of those fears.

[00:22:41] Firstly, AI isn’t going anywhere. It’s here to stay. So unless you plan to live in a bubble, you’re going to have to find a way to get comfortable with it. Keep your friends close and keep your enemies closer. People are always afraid of things that they don’t understand. That’s where the fear of the unknown ultimately comes from. When we don’t understand something, we don’t feel that we know it and can predict how it will behave, which makes us feel on edge because we never know what to expect.

[00:23:10] I’m not suggesting that you become a maths whiz or a Silicon Valley software engineer. But there are plenty of resources that can help to demystify AI and how it works, which will make it all seem much less scary and separate the fact from the science fiction. Like many people, I’m still very much learning about AI, so I don’t have specific recommendations at this time. But your good friend Google, which ironically is probably one of the best use cases for AI, can help you to find articles, books, videos, documentaries, and courses that explain the underlying technology, its potential applications and, more importantly, its limitations, so that your imagination doesn’t run quite so wild.

[00:23:53] Secondly, I would consider experimenting with AI and investing in AI-related skills in your industry now. It’s inevitable that some jobs currently performed by humans will be completely replaced by technology, whether that technology is AI-powered or not. But in many cases, the introduction of AI will simply change the nature of the skills required to successfully perform a given job or task. If you take it upon yourself to learn these skills now, you’ll be future-proofing your employability.

[00:24:25] Human skills like creativity, critical thinking, and empathy are difficult, if not impossible, for AI to replicate exactly. As a writer, I know that there are non-writers who are currently using AI to write stories and publish them much faster than I can. Are those stories able to connect and resonate with people in the same way as a story written by someone who’s actually experienced the events, thoughts, and emotions conveyed through the characters? Probably not.

[00:24:53] But, for example, AI copywriting and image generation tools can help with generating promotional content to support a book launch. AI tools can help self-published authors to create audiobooks or translations of their work, both things that have historically been prohibitively expensive to do because of the cost of voiceover artists, recording facilities, and translators. So it’s worth playing around with available tools and mastering how to use them, because those skills will be in demand.

[00:25:22] And even where AI is capable of replacing humans in certain roles, there may still be a need for human oversight and intervention to make sure that AI systems are functioning properly and ethically, and that the outputs are as intended. I’d still have to listen to an AI-narrated audiobook to make sure that it was pronouncing everything correctly and hadn’t inadvertently inserted any gobbledygook or random swear words into the audio. And, similarly, even if I could get AI to translate my books into 15 different languages, I’d need a translator to read and verify that the translations were accurate and contextually correct, because I wouldn’t be able to tell myself.

[00:26:01] And, of course, if you’re in the market for a career change, then remember that the growth of AI is creating new opportunities and roles. As I mentioned earlier, AI models are trained and built by humans, so we’ll still be needed if this rate of growth is to continue. From machine-learning engineers to data and research scientists, there’s probably more demand for workers in AI than ever before.

[00:26:25] And if the technology side of things isn’t quite your bag, then I can also see the number of consulting and advisory roles growing. Because companies will need people familiar with specific industries to advise on the best way to implement AI technologies in those industries, so that they can solve problems, increase efficiencies, and navigate the ethical challenges, not to mention providing human oversight and intervention, where necessary. So, while AI may be taking some jobs, there will be ways for you to work with it, rather than against it.

[00:26:57] Finally, I’d be very deliberate about the AI technologies that you choose to let into your life, instead of blindly adopting whatever big tech is trying to push. And this is really a personal judgment call. As I always say, digital wellness is not a one-size-fits-all approach. You have to do what works for you. But it’s equally important to do it armed with all the facts.

[00:27:20] For the bigger technologies that have society-wide ethical implications, like the use of AI in the criminal justice system or in recruitment decisions, you have little say as an individual. But there is plenty of activism around these topics and lobbying of everyone from companies to governments around the world, in order to make sure that these issues are taken seriously and that appropriate legislation is introduced to protect ordinary people. So, of course, if you feel particularly passionate about any of these issues, you can add your support to these efforts.

[00:27:53] On a personal level, however, I think you can best serve yourself by simply scrutinising any new tech, whether that’s devices, apps, websites, online tools, or new features, before introducing them into your life. Ask yourself whether you need it, what the implications of having it might be, and whether you’re comfortable with those implications.

[00:28:16] There are some things that many of us have become comfortable with, or adopted without giving much thought to the consequences. We know Google is analysing our searches. We know social media platforms are observing our behaviour and serving us specific ads in response. And a lot of us didn’t think twice about offering up our fingerprints, and then our faces, to the recognition technologies that unlock our phones.

[00:28:39] Fully aware of the risks, we chose to accept these things and continue to use them with this awareness. Google is helpful, social media is fun, and none of us can remember our passwords, so flashing our faces in front of the screen is altogether easier, even if that does mean giving our personal data to big tech. But we don’t need everything that they try to pitch us, even if clever advertising tries to convince us otherwise.

[00:29:05] So I want to finish up today by sharing a funny story with a serious message. I was quick to hop on the Amazon Alexa bandwagon. In a Black Friday sale, not only did I buy myself an Echo Dot, but I bought one for my parents and one for my brother too. I didn’t stop to think about whether I needed it, or what the implications of having it in my home might be. I was just excited to have a new toy. I played music through it, even though it wasn’t a particularly good speaker, and I entertained myself and my guests by asking silly questions, like how to get rid of a dead body, or asking it to tell me jokes or trivia.

[00:29:42] But then Alexa started talking to me all by herself, without me activating her with the voice command. She said things like, “Sorry, I didn’t hear what you said.” But, more worryingly, she spontaneously started talking to me about things that I had either discussed on the phone or with guests in my house, but had never questioned her about directly. And this kind of freaked me out.

[00:30:05] A quick Google search confirmed that I wasn’t alone in my experiences. But the official line from Amazon is that, of course, the device is always listening, because it’s listening for its wake word. So whether you use “Alexa,” “Computer,” or “Echo,” it listens out for this word so that it can respond to you in near real time. It only records you, and uploads that recording to the cloud, after you utter the wake word. But it doesn’t always get that right. Sometimes, it might mistake something you said for its wake word and start recording.
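
For anyone who wants to picture how that works, here’s a simplified, hypothetical sketch of the wake-word loop in Python. Typed text stands in for the microphone, and the matching logic is deliberately naive, but it captures the principle: the device is always listening locally, and only what follows a detected wake word is recorded and uploaded, which is also why a misheard wake word can trigger a recording.

```python
# A simplified, hypothetical sketch of wake-word detection. Real devices
# run an on-device audio model; here typed text stands in for the
# microphone, and every name below is invented for illustration.

WAKE_WORDS = {"alexa", "computer", "echo"}

def heard_wake_word(snippet: str) -> bool:
    # The device constantly scans a short, local rolling buffer.
    # Loose matching like this is why similar-sounding phrases can
    # trigger it by mistake.
    return any(word in snippet.lower() for word in WAKE_WORDS)

while True:
    snippet = input("(mic) ")  # always listening locally...
    if heard_wake_word(snippet):
        # ...but only after a detected wake word does the device start
        # recording and uploading audio to the cloud for processing
        print("wake word detected: recording and uploading to the cloud")
    else:
        print("buffer discarded: nothing leaves the device")
```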

[00:30:39] Now, none of this filled me with much confidence. After all, I didn’t put the Echo Dot in my house so that Amazon could listen to everything that happens there. But I’ll be honest and say I kept it plugged in for a little bit longer, until I read a story about a guy in Germany.

[00:30:55] An Amazon Echo in Hamburg started its own party at 2am one Saturday morning, even though its owner, Oliver Haberstroh, wasn’t home and hadn’t activated it. It played music so loud that it woke his neighbours who, after knocking and ringing and shouting at the empty apartment, eventually called the police. And when the police arrived, they had to break down the front door just to turn Alexa off, then change the locks to secure the door.

[00:31:24] When Oliver came home, his key didn’t work. He found a note and he had to go to the police station to get the new keys, in addition to paying for the cost of the locksmith. Not only was that the final straw for Oliver, but it was the final straw for me. I disconnected my Echo Dot, and you know what? I don’t even miss it. I never needed it, and if I’d known from the outset that it would be listening to me, I probably wouldn’t have bought it for me or my family in the first place. And I feel much less anxious knowing that every word I utter isn’t being listened to, at least not by Amazon anyway.

[00:31:58] That’s it for today’s episode, I hope you’ve enjoyed it. And if you’ve been feeling stressed out by all the AI chat recently, and worrying about what it all means, particularly for your job, then hopefully I’ve helped to demystify things a little, and given you some practical ways to make it a little less anxiety-inducing.

[00:32:15] You can find show notes for today’s episode over on my website at thedigitaldietcoach.com/005. And, as always, if you do decide to learn more about AI, to explore new skills related to managing the various technologies, or you even choose to ditch any of your existing AI-dependent tech, then I’d love to hear about it. You can either leave a comment on your social media platform of choice – all my handles can be found on the show notes page, so that you can tag me – or you can email me directly at podcast@thedigitaldietcoach.com. I promise to respond to every single message.

[00:32:55] I know you’re busy and your time is incredibly valuable. So, as always, I thank you for choosing to spend a little bit of your day with me and I’ll see you next time.

Keep in touch with me

Get Unplugged

Unplugged is a short weekly newsletter designed to help you put the focus back on yourself, your wellbeing, and your life offline. Expect a question or prompt to reflect on, a digital wellness challenge to try in your own life, the cliff notes for any advice, tips, or tech-life hacks discussed on my podcast, and info about upcoming coaching programmes and events.

You can unsubscribe at any time and I'll never send you spam – ever.

Meet Marisha

Marisha Pink is a Certified Digital Wellness Coach who is on a mission to empower women everywhere to live life more intentionally in an age of digital distractions. She helps women create healthier digital habits that better balance the technology in their lives, so that they can take back control of their time, reclaim their happiness, and live their best lives offline.
