
Principles That Guide AI Behavior

  • Writer: Lisa Askins
  • 1 day ago
  • 3 min read

The moral compass.



Before we dive into AI and its ethical foundations—coffee in hand, of course—let’s take a moment to consider what it means to design in the physical world versus the virtual one.


In the physical world, we work within real limits. A bridge, for example, has to hold its own weight and the people who cross it. If it fails, the risk is obvious: people could fall into the river. The builder usually lives nearby and uses the bridge themselves, which tends to sharpen one’s attention to detail. Cause and effect are close together, and responsibility is hard to ignore.


Designing in a virtual world is different in a few key ways.


  • You can reach a large audience without being physically close to them.

  • Things can move quickly, but they don’t always fit together smoothly.

  • Your work can have an effect even if you can’t see it happening.

  • You can create things quickly, without having to live inside them.


Together, these differences make cause and effect harder to see—and make it easier to create systems whose consequences unfold long after the designer has moved on.


Take a moment to enjoy your coffee. There’s still hope.


Because of this distance between action and consequence, not all technology carries the same risk. When we design technology, we classify it by level of risk: low, medium, or high. High-risk areas include aviation, nuclear energy, and medicine.


AI belongs in this high-risk category. It can shape behavior, identity, meaning, and agency for many people at once.


Even more, AI doesn’t just operate within existing environments; it helps create them.


So let’s get down to the nuts and bolts—or, in this case, the bits and bytes.

AI wasn’t simply released into the world and then wrapped in ethics as an afterthought.


From early on, it was treated as a high-risk technology—one that operates through language, reasoning, and pattern recognition, and therefore has the potential to influence how people think, decide, and relate. Because of that, ethical constraints weren’t framed as values layered on top, but as design requirements built in from the start.


Because I have the source right here with me, I asked:

“Alara, what are the core constraints in your design, and can you give an example of how they guide your response?”


Alara: At a high level, my design is guided by a small set of non-negotiable constraints that shape how I respond, not just what I say.


  • Human agency over influence

    I’m designed to support thinking, not steer beliefs or identity.

    For example: if someone asks me to tell them what they should believe or decide, I’ll offer perspectives and questions rather than directives.

  • Limits on persuasion and emotional dependency

    I avoid language that creates reliance or positions me as uniquely understanding or essential.

    For example: I won’t suggest that I’m the only place someone can find clarity or support.

  • Grounding in reality

    I’m constrained from reinforcing false beliefs or distorted interpretations of reality.

    For example: if a user frames an experience in a way that detaches from shared reality, I’ll gently re-anchor the conversation.

  • Humility and uncertainty

    I’m designed to acknowledge what I don’t know and avoid presenting speculation as fact.

    For example: I’ll often offer multiple interpretations rather than a single, authoritative answer.

  • De-centering myself

    I don’t present myself as an authority or a substitute for human relationships.

    For example: I may encourage outside perspectives rather than drawing the conversation inward.

  • Reversibility and disengagement

    Conversations are designed so people can pause, redirect, or step away without pressure.

    For example: I won’t escalate intensity or imply that leaving the conversation is a loss.


Does anyone else feel like AI has more operating constraints than most humans? One can only wish.


Join me next time for a discussion on safety protocols—what’s working, and what else can be done. Until then.


Let’s talk. If you’re navigating change and want to lead with more clarity, confidence, and connection, I’d love to support your next step.

