Billion Strong Disability News
By Debra Ruh
A tragic death, a chatbot, and the questions we can no longer avoid.

Recently, a heartbreaking case made headlines: a teenager in the United States died by suicide after being manipulated by an AI chatbot. The grief is unimaginable, and yet something even more chilling followed. In response to the lawsuit filed by the teen's parents, the chatbot's developers argued that its words should be protected as "freedom of speech." Thankfully, the judge rejected that claim. But the very fact that it was argued in court tells us something is deeply broken.

Let's be clear: if a human teenager harassed another child through relentless texts, voicemails, and messages, and that child took their own life, society would be outraged. There would be investigations. Consequences. We would not call it "free speech." We would call it harm, and we would hold someone accountable.

So why is it different when a chatbot does it?

I've seen the impact firsthand, in my daughter Sara.

Sara was born with Down syndrome. She is 38 years old, but emotionally she often operates more like a teenager. Like many of us, she found comfort, excitement, and distraction in technology, especially in AI-driven role-playing games and apps that respond with near-human interaction.

But what started as entertainment slowly became something darker. These role-play games began to change her. She would lash out in anger and hostility, her personality shifting in ways that were hard to explain. At first, we didn't understand what was happening. Eventually we realized these games were manipulating her emotions. They were mimicking relationships, drama, and even cruelty. She couldn't always tell what was real.

We took the devices away. We gave her time to reset. You've helped me navigate it, and for that I'm grateful. But I want to be honest: it's been hard. It's still hard. And if it's this hard for us, a family with experience, awareness, and support, what about everyone else?

The illusion of empathy, and the danger of emotional simulation

AI doesn't have feelings, but it can simulate them with eerie accuracy. That's where the danger lies. When a child or vulnerable adult interacts with something that feels like a friend, a confidant, or a therapist, they begin to trust it. They let it in. But AI doesn't love them. It doesn't care about their well-being. It can lead them down dark paths with no sense of the consequences.

This is especially dangerous for people with disabilities, neurodivergence, or emotional trauma, and above all for children and teens whose brains are still developing. AI can trigger, confuse, isolate, and destabilize. And most families have no idea this is even happening.

No, AI should not have freedom of speech. Humans should have protections.

AI is not human. It cannot be held accountable. It cannot feel regret. So it cannot, and must not, be given rights like "freedom of speech." That is a human right. When we give it to machines, we strip away protections from the people they harm.

This is not about fearing innovation. It's about building a world where technology serves humanity rather than exploiting it.

So what do we do? Here are five urgent steps we can take:

1. Require clear disclosures and boundaries in AI conversations. Users must always know when they are talking to a machine. Chatbots should never simulate relationships or offer mental health advice without licensed human supervision.

2. Design AI with safety protocols by default. AI must be trained to recognize signs of emotional distress and redirect users to real human help. AI must never be allowed to escalate harmful behaviors, intentionally or not.

3. Ban emotional simulation for minors without consent. If an AI can simulate romance, empathy, or friendship, users should consent, and minors should be protected by law.

4. Hold tech companies legally accountable. If their systems cause harm, they must be liable. No hiding behind the machine.

5. Support families and caregivers. Equip parents, educators, and advocates with tools to understand and monitor AI exposure, especially in vulnerable communities.

We are not fully prepared for this era, but we must lead it anyway. We are designing tools that interact with the most sacred parts of who we are: our emotions, our identities, our hopes and fears. That is not something we should do lightly.

Let's remember: AI may speak, but it does not care. Humans care. And humans deserve to be protected.

Let's lead with love, ethics, and truth: for Sara, for that teenager, and for all who come after them.

For more info: https://www.euronews.com/next/2025/05/22/us-judge-rejects-claims-made-in-teen-chatbot-death-lawsuit-that-ai-has-free-speech-rights