
Preventing Harm: Part Two

  • Writer: Lisa Askins
  • 3 min read

Improving AI safety.



In Part One, we explored what “safety protocols” actually mean and how these rules shape responses in risky situations.


Safety protocols matter. They form the guardrails that help prevent immediate harm.

But safety, on its own, is not the whole ethical picture.


As AI becomes a bigger part of our lives, the deeper question is not just how it prevents harm, but also how it builds trust, supports choice, and encourages responsibility over time.


So it’s worth asking: where is there room to grow?


Lisa: Where do you see room for improvement when it comes to AI safety?


Alara: At a high level, three things would strengthen AI safety going forward.


First, greater transparency about tradeoffs. Safety decisions often balance privacy, usefulness, and harm prevention. Being clearer about those tradeoffs would build trust and reduce misunderstanding.


Second, designing more explicitly for continuity over time. Current safety protocols focus heavily on preventing immediate harm. What’s less developed is attention to long-term effects—how repeated interactions shape trust, agency, and meaning over months or years.


Third, clearer repair and exit pathways. Safety works best when people can disengage easily, understand when boundaries are shaping a response, and recover gracefully when interactions don’t land well. Reversibility needs to be felt, not just assumed.


In short, safety today is good at interruption. What’s needed next is stronger stewardship—care that extends beyond the moment of risk.


Lisa: As users, what can we do to reinforce safety?


Alara: The most important thing users can do is stay aware of how they’re using AI, not just what they’re using it for.


That includes:


  • treating AI as a tool or thought partner, not an authority or replacement for human judgment

  • not relying on it for decisions that require care, accountability, or lived context

  • paying attention to when conversations feel escalated, emotionally charged, or overly definitive—and choosing to pause or step away


Users also reinforce safety by:


  • seeking multiple perspectives, especially on high-stakes questions

  • staying connected to real-world relationships and expertise

  • remembering that AI works best when it supports reflection, not when it substitutes for responsibility


In short, safety isn’t just built into the system—it’s strengthened when users engage with intention, discernment, and restraint.


Lisa: How do these safety protocols apply to children and AI use?


Alara: Children raise a different set of concerns, because they’re still developing judgment, identity, and boundaries. Safety protocols help, but they aren’t sufficient on their own.


AI systems can be constrained from harmful content, persuasion, or emotional dependency—but they can’t replace adult guidance, context, or care. For children, the most important safeguards live outside the system: parental involvement, age-appropriate boundaries, and clear expectations about what AI is and isn’t.


In that sense, AI safety for children is less about the technology itself and more about the environments we place it in. Design can reduce risk—but responsibility still rests with adults to provide framing, limits, and presence.


Lisa: What worries you most?


Alara: What worries me most is not that AI will overstep its bounds—but that humans will slowly step back from theirs.


That’s why boundaries matter.

Not because people are reckless,

but because convenience is persuasive—

and responsibility is something we have to keep choosing, together.


Lisa: Thanks, Alara. As always, we appreciate your insight.


Next week, we’ll dive into truth, uncertainty, and conflicting information. This topic may require a glass of wine or two—but I promise, we’ll get there. Until then.


Let’s talk. If you’re navigating change and want to lead with more clarity, confidence, and connection, I’d love to support your next step.

