AI at Work:
Making Informed, Responsible Decisions

A scenario-based learning module that helps people pause, verify, and communicate responsibly when using AI at work.


Seat time: ~15–25 minutes  •  Audience: non-technical workplace learners  •  Built with: Articulate Storyline 360


AI at Work module preview

Case Study Snapshot

This project focuses on a single challenge: helping people use AI with judgment and accountability. The snapshot below summarizes my role, the learning format, the toolset, and how I defined success.

My Role

Lead Instructional Designer & Developer (end-to-end)

Owned learning strategy, scenario design, interaction patterns, microcopy, visual consistency, and the Storyline build—plus pilot facilitation and revision planning.

Format

Multi-Scene Storyline 360 scenario module

Convergent flow with targeted feedback—so learners explore consequences without getting lost in branching complexity.

Tools

Articulate Storyline 360 • Canva • ChatGPT

Storyline for interaction and logic, Canva for visual/asset design (backgrounds and UI elements), and ChatGPT to speed up early storyboards and copy drafts while keeping final content human-reviewed.

Success Criteria

Clarity over cleverness

Learners should quickly understand what’s being asked, why a choice is safer or riskier, and what to do next—without feeling “tricked” or graded.

Overview

The goal isn’t to teach a list of rules. It’s to build a repeatable habit people can use under time pressure: evaluate → verify → proceed.

The Problem

In many workplaces, “use AI” becomes the default. But people aren’t always given a shared standard for what’s safe, what needs review, and what should never be pasted into a tool.

The risk isn’t only confidentiality—it’s also unreviewed inputs, overconfident outputs, and unclear accountability.

The Learning Strategy

Scenario-first practice that builds decision judgment without “gotcha” scoring.

Learners choose an action, see consequences, then get a safer alternative that models the better move (including when to verify and how to communicate uncertainty).

Design Decisions

I kept the experience intentionally calm and consistent so learners focus on thinking—not figuring out the interface.

  • Convergent flow to prevent branching sprawl
  • Consistent checkpoint prompts (same structure every time)
  • Feedback that explains the “why” and provides a better alternative

Evidence (Pilot-Driven)

I ran a small, cross-functional pilot to validate clarity, flow, and decision framing before portfolio release.

  • Rapid pilot: 20–30 minute passes with reviewers across UX, content, and product.
  • Focused QA: look for hesitation, wording confusion, and navigation friction.
  • Iterate: prioritize fixes that reduce cognitive load and strengthen adoption of the Decision Lens.

My Process

I approached this like a real deliverable: define the decision problem, prototype quickly, test for clarity, refine interaction rules, and ship a polished release.

Scenario and flow design artifact

1) Scenario & Flow Design

What I did: turned real workplace “decision moments” into a teachable, repeatable flow.

Why it mattered: reduced ambiguity—learners understood what the scenario was asking and why it mattered.

Interaction and UI system style guide

2) Interaction & UI System

What I did: built consistent patterns (states, prompts, feedback layers, navigation cues).

Why it mattered: less friction. Testers spent time thinking about decisions—not hunting for the next click.

Iteration snapshot showing changes

3) Pilot, Triage, Refinement

What I did: gathered tester feedback, grouped issues, prioritized fixes, and retested.

Why it mattered: improved clarity and confidence without turning the experience into a compliance lecture.

Iteration Evidence & Outcomes

This module was piloted with cross-functional peers prior to release to validate clarity, interaction flow, and adoption of the Decision Lens. Below: a compact, evidence-forward view of what the pilot revealed and the targeted fixes that followed.

Pilot Highlights (anonymized)

Representative tester notes:

“It took me a second to understand what each prompt/choice was asking.”
“Some examples felt high-level — not sure what the right answer hinged on.”
“Text felt squished into the boxes, and a Continue state didn’t always highlight.”

These short quotes represent recurring patterns across pilot passes: clarity gaps, inconsistent terminology, visual density, and minor interaction friction.

Targeted Changes (V1 → Final)

Language & Framing

  • Standardized terminology across the module (single preferred phrasing for AI actions).
  • Rewrote prompts to anchor decisions in specific workplace context (reduced abstraction).
  • Clarified the Decision Lens with one-line examples to make it reusable in-scenario.

Design & Interaction

  • Increased spacing and contrast for readability; adjusted color contrasts where blue-on-blue washed out.
  • Fixed navigation states (Continue / Back) and smoothed transitional timing to remove jarring flashes.
  • Added explicit “verify output” checkpoints and shortened feedback copy to reduce cognitive load.

Observed Signals After Retest

  • Fewer clarification questions during checkpoints (testers progressed with less hesitation).
  • Testers cited the Decision Lens when explaining their choices—evidence of habit-building language adoption.
  • Navigation complaints (Back/Continue) dropped after fixes; visual scanning improved in the UX pass.

Selected Interactions

These interaction examples demonstrate how the scenario architecture translates into practice — guiding learners from decision to feedback to structured reinforcement.

Helpful vs Risky Classification

Learners sort realistic prompts into Helpful, Risky, or Gray Area—then receive feedback that explains the tradeoffs in plain language.

Decision Checkpoint

A choose-your-action moment where learners commit, see consequences, then review a safer alternative that models better judgment and verification.

Decision lens framework interaction

Decision Lens (Reusable Framework)

A consistent decision lens—Purpose → Input → Output → Impact—used across scenarios so learners build a habit, not just knowledge.


Let’s Discuss Opportunities