Amazon Bedrock in a Real-Time Public Safety Platform

It's 2 AM. A wildfire is spreading fast, jumping containment lines. An incident commander is managing 12 agencies, coordinating vehicle staging, tracking road closures, and watching evacuation zones fill with people trying to leave. They need to push a public alert. They open a blank text box and start typing.

That blank text box is the problem we solved.

Perimeter is the real-time public safety platform Akadenia built from the ground up. It's used by 280+ agencies across 10+ US states and 30+ counties — protecting over 4.5 million lives (as of early 2026). We integrated Amazon Bedrock to automatically draft evacuation alerts and situation reports from live incident data, so commanders can stay in the operational picture instead of switching to writing mode. Here's how we built it, what we learned, and the guardrails that make it safe to use in actual emergencies.


The Problem: Manual Alert Drafting During Active Incidents

Perimeter gives emergency managers a shared, real-time common operating picture. They use Perimeter Plan to pre-plan evacuation zones and clearance sequences before an incident. When something happens, they switch to Perimeter Map — a real-time collaborative map shared not just between agencies, but with the public via a shareable link that requires no app download.

During an active incident, the map is live. Evacuation zones are drawn. Populations are estimated. Routes are identified and prioritized. All of that structured data is sitting right there in the system.

But communicating to the public was still manual. An incident commander managing a major wildfire might need to draft five, ten, or fifteen public alerts over the course of an incident — updates as the fire moves, route changes as roads close, new evacuation orders as zones expand. Each one required stopping, opening a text editor, and writing from scratch under extreme cognitive load.

The ideal scenario was obvious: the system already has the data. It knows the hazard type, the affected zones, the recommended routes, and the estimated population. Why not let AI draft the alert from that data, and let the human review and approve it in seconds rather than write it from scratch in minutes?


Why Amazon Bedrock

When we evaluated generative AI options, Bedrock was the right choice for three concrete reasons.

Pay-per-request pricing. Emergency platforms have extremely bursty usage. During an active wildfire, usage spikes hard. Between incidents, usage drops to near zero. An always-on GPU cluster — or even a managed endpoint with minimum capacity — would waste money during the long quiet stretches that define most of a platform's operational time. Bedrock's pay-per-request model means costs scale directly with actual emergency activity. A major incident generating 30 alerts costs dollars, not hundreds of dollars.

No infrastructure to manage. We're already running three ECS Fargate microservices, an EC2 Cache Controller, RDS Postgres with read replicas, and ElastiCache Redis. Adding model hosting would have meant a new class of infrastructure to monitor, patch, and scale. Bedrock is fully managed — we call an API, we get a response, we don't think about GPU utilization.

AWS-native trust boundary. Perimeter runs on AWS across two regions (us-west-2 and us-east-1). We already had VPC isolation, IAM roles per microservice, GuardDuty for threat detection, Security Hub for compliance posture, and WAF on all ALBs. Bedrock fits natively inside that trust boundary. The Bedrock API call from our Lambda function stays inside the VPC using a VPC endpoint — it never traverses the public internet. The Lambda's IAM execution role gets a policy scoped to exactly the Bedrock models it needs; the Internal API's IAM role only needs permission to invoke that Lambda. No new trust boundaries, no new secrets to manage.
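
For illustration, the two scoped policies might look something like this. This is a minimal sketch: the region, account ID, function name, and model identifier are placeholders, not our actual configuration.

// Lambda execution role: may invoke exactly one Bedrock foundation model.
// Illustrative only; the model ID and ARNs are placeholders.
const lambdaBedrockPolicy = {
  Version: "2012-10-17",
  Statement: [{
    Effect: "Allow",
    Action: "bedrock:InvokeModel",
    Resource: "arn:aws:bedrock:us-west-2::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
  }],
};

// Internal API task role: may invoke only the drafting Lambda.
const internalApiPolicy = {
  Version: "2012-10-17",
  Statement: [{
    Effect: "Allow",
    Action: "lambda:InvokeFunction",
    Resource: "arn:aws:lambda:us-west-2:123456789012:function:draft-alert",
  }],
};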


Architecture: Where Bedrock Lives in the Stack

Perimeter's backend is three ECS Fargate services behind Application Load Balancers:

  • Public API (Fastify/Node.js) — serves the public-facing map and incident data
  • Internal API (Express.js/Node.js) — core business logic for first responders
  • Estimator API (FastAPI/Python) — traffic simulation and population estimation

Bedrock calls originate from the Internal API — but not directly. That's where all the incident state lives: zone boundaries, hazard type, route assignments, population estimates from the Estimator API. When a commander requests a draft alert, the Internal API assembles the relevant structured data from the current incident and invokes an AWS Lambda function, which makes the actual Bedrock inference call. Keeping the Bedrock call in Lambda isolates the long-running inference from the ECS service, handles timeouts cleanly, and keeps the IAM boundary tight.
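
As a rough sketch of that hand-off, the Internal API side looks something like the following. The field names, the function name, and the loadIncident helper are illustrative placeholders, not Perimeter's actual schema.

// Internal API (Express.js): assemble incident state and invoke the drafting Lambda.
// Minimal sketch; loadIncident is a hypothetical data-access helper.
import { LambdaClient, InvokeCommand } from "@aws-sdk/client-lambda";

declare function loadIncident(id: string): Promise<any>; // hypothetical helper

const lambda = new LambdaClient({ region: "us-west-2" });

export async function requestDraftAlert(incidentId: string): Promise<{ draft: string }> {
  const incident = await loadIncident(incidentId); // zones, hazard, routes, population

  const payload = {
    hazardType: incident.hazardType,
    zoneNames: incident.zones.map((z: any) => z.name),
    routes: incident.routes.map((r: any) => r.description),
    populationEstimate: incident.populationEstimate,
    status: incident.status,
  };

  const result = await lambda.send(new InvokeCommand({
    FunctionName: "draft-alert", // placeholder function name
    Payload: Buffer.from(JSON.stringify(payload)),
  }));

  return JSON.parse(Buffer.from(result.Payload ?? []).toString());
}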

The flow looks like this:

Live Incident Data (zones, hazard, routes, population)
        ↓
  Internal API (Express.js on ECS Fargate)
        ↓
  AWS Lambda Function
        ↓
  Amazon Bedrock (Anthropic Claude / foundation model)
        ↓
  Draft Alert (labeled "AI Draft" in the Perimeter UI)
        ↓
  Incident Commander → Edit → Approve
        ↓
  Published Alert (sent to agencies + public map)

The draft appears in the Perimeter interface clearly labeled AI Draft. The commander reads it, makes any edits, and either approves it for distribution or discards it. Nothing publishes without that human step.
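
The Lambda side of that flow is small. Here is a minimal sketch of the inference call, assuming the Anthropic Messages request format on Bedrock; the model ID, token cap, and the buildAlertPrompt helper are placeholders, not our production code.

// Drafting Lambda: make the Bedrock inference call and return the draft text.
import { BedrockRuntimeClient, InvokeModelCommand } from "@aws-sdk/client-bedrock-runtime";

declare function buildAlertPrompt(incident: Record<string, unknown>): string; // hypothetical: fills the prompt template shown in the next section

const bedrock = new BedrockRuntimeClient({ region: "us-west-2" });

export const handler = async (event: Record<string, unknown>) => {
  const response = await bedrock.send(new InvokeModelCommand({
    modelId: "anthropic.claude-3-sonnet-20240229-v1:0", // placeholder model ID
    contentType: "application/json",
    accept: "application/json",
    body: JSON.stringify({
      anthropic_version: "bedrock-2023-05-31",
      max_tokens: 400, // alerts are short; this also caps per-request spend
      messages: [{ role: "user", content: buildAlertPrompt(event) }],
    }),
  }));

  const body = JSON.parse(new TextDecoder().decode(response.body));
  return { draft: body.content[0].text };
};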


Prompt Engineering for Emergency Content

Emergency alerts have specific requirements that general-purpose prompting doesn't naturally satisfy. They need to be:

  • Plain language — no jargon, no passive voice, no bureaucratic hedging
  • Geographically specific — "Evacuate the downtown core north of First Street" not "certain areas may be affected"
  • Action-first — the most important instruction leads: "Evacuate NOW"
  • Route-explicit — tell people where to go, not just that they should leave
  • Short — under 200 words for easy reading on a phone screen during an emergency

Our first prompts produced outputs that were technically accurate but sounded like government press releases. Verbose, passive, too many qualifiers. We iterated.

Here's the structure we landed on:

You are assisting an emergency manager during an active incident.
Generate a public evacuation alert based on the following incident data.

Incident data:
- Hazard type: {hazardType}
- Affected zones: {zoneNames}
- Recommended evacuation routes: {routes}
- Estimated affected population: {populationEstimate}
- Incident status: {status}

Requirements:
- Plain language — write for a general audience, not a government document
- Lead with the most critical action ("Evacuate NOW from [area]...")
- Include specific route guidance in the body
- Do not use passive voice or bureaucratic language
- Keep under 200 words
- End with where to find updates

This alert will be reviewed by an incident commander before publishing.
Write the draft alert now.

The key improvements over our initial attempts: naming the output format explicitly ("evacuation alert"), specifying the tone constraint ("not a government document"), and including the reminder that a human will review it before it goes out. That last instruction subtly shifted the outputs from overly hedged language toward more confident, actionable drafts.

We also added a few-shot example of a good alert for incident types we hadn't seen before, which consistently improved output quality when the hazard type was less common (civil unrest, hazmat).
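
In code, that amounts to conditionally appending an exemplar to the prompt. A sketch, with an invented exemplar and hazard key rather than our production content:

// Append a few-shot exemplar for less common hazard types.
// The exemplar text and hazard keys below are invented for illustration.
const FEW_SHOT_EXEMPLARS: Record<string, string> = {
  hazmat:
    "Example of a good alert:\n" +
    "EVACUATE NOW: Chemical release near the Mill Road rail yard. Leave the area east of " +
    "Mill Road immediately and head west on Route 9. Avoid the rail crossing. " +
    "Updates on the county emergency map.",
};

function withFewShotExample(prompt: string, hazardType: string): string {
  const exemplar = FEW_SHOT_EXEMPLARS[hazardType];
  return exemplar ? `${prompt}\n\n${exemplar}` : prompt;
}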


Guardrails: AI as Co-Pilot, Not Autopilot

This is the most important part of the implementation.

A wrong evacuation alert during an active emergency can cause real harm: mass confusion, people evacuating toward danger instead of away from it, agencies receiving conflicting information, communities losing trust in the platform. Getting the AI part right matters less than getting the human-in-the-loop part right.

Every Bedrock output is a draft. Always. There is no path in the codebase where a Bedrock response publishes directly to the public map or agency notifications. The draft sits in the UI, clearly labeled, until a human explicitly approves it.

Full edit control. Commanders don't just approve or reject — they can edit the draft word by word before approving. The AI provides a starting point, not a final answer.

Audit logging. Every AI-generated draft, every edit made to it, and every final published version is logged with timestamps and the identity of the approving commander. This feeds directly into after-action review documentation and supports FEMA reimbursement filing, which requires detailed records of communications during declared incidents.
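
The shape of each audit record is simple. A sketch with illustrative field names, not Perimeter's actual schema:

// One record per draft, kept for after-action review and FEMA documentation.
interface AlertDraftAuditRecord {
  incidentId: string;
  draftId: string;
  modelId: string;               // which foundation model produced the draft
  generatedAt: string;           // ISO timestamp of the Bedrock response
  draftText: string;             // the draft exactly as generated
  publishedText: string | null;  // the final text after commander edits, if published
  approvedBy: string | null;     // identity of the approving commander
  approvedAt: string | null;
  status: "draft" | "edited" | "published" | "discarded";
}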

Graceful degradation. If Bedrock is unavailable — network issue, rate limit, service disruption — the draft button simply doesn't appear and commanders use the manual flow exactly as they did before the feature existed. There's no degraded mode that partially works. It either works or it falls back to manual, cleanly.
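
The fallback is a single code path. A sketch, reusing the hand-off function from the architecture section; the error handling shown is illustrative:

// If the drafting path fails for any reason, return null and the UI omits the draft button.
async function getDraftOrNull(incidentId: string): Promise<string | null> {
  try {
    const { draft } = await requestDraftAlert(incidentId);
    return draft;
  } catch (err) {
    // Bedrock throttling, Lambda timeout, network issue: log it and fall back to manual
    console.warn("AI draft unavailable, falling back to manual alert flow", err);
    return null;
  }
}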

Cost controls. We set token limits per request and use Bedrock's built-in quotas to prevent runaway usage during large-scale incidents where many commanders might be requesting drafts simultaneously.


Results

The numbers are straightforward, measured across the first six weeks of live deployment. Alert drafting time dropped from 3–5 minutes to under 30 seconds. Commanders spend that time reviewing and approving rather than staring at a blank text box.

The less quantifiable result is what agencies told us: during multi-agency incidents, the reduction in cognitive load is significant. Writing an alert requires switching from "operational brain" to "communications brain" — you have to step out of the real-time picture to construct language. The draft eliminates that context switch. Commanders stay in the map, review the draft in the same interface, make a few edits, and approve. The operational rhythm doesn't break.

We've had zero incidents where an AI-generated alert was published without human review.


Lessons Learned

Start with the least risky use case. Drafting is lower stakes than publishing. We built the guardrails into the architecture first, before worrying about output quality. Getting the human-in-the-loop design right was more important than getting the AI output perfect.

Prompt engineering is real engineering. We budgeted two weeks for prompt iteration. We needed it. Emergency alert language has specific conventions that generic prompts don't produce. Treat it like building any other feature — specification, iteration, testing against real examples, and ongoing refinement.

AWS-native Gen AI has a real advantage. The VPC endpoint, IAM roles, and security posture alignment were not minor conveniences. For a government platform serving public safety agencies, the ability to say "Bedrock calls never leave our VPC" and "access is controlled by the same IAM policies as everything else" matters during procurement conversations and security reviews.

The guardrails matter more than the feature. Every conversation we've had with agency administrators about the AI drafting feature has led with "how does this prevent bad alerts from going out?" The technical answer is: the human approves each one. The design answer is: we made the human approval step the fastest part of the flow, so there's no incentive to skip it.

What's next: We're expanding Bedrock usage to generate situation report drafts, after-action summaries, and FEMA reimbursement documentation — all structured enough to benefit from AI drafting, all high-stakes enough to require human review before finalization. The pattern scales.


Perimeter is the platform emergency managers use when they need to move fast and get it right. Amazon Bedrock helped us build a feature that saves time in those moments — without removing the human judgment that those moments require.

If you're building on AWS and evaluating generative AI use cases, the architecture questions — where does the AI call live, how does human review integrate, what happens when the API fails — are worth answering before you write a single prompt.

If you're exploring how generative AI fits into your product, Akadenia offers AI consulting and cloud managed services on AWS — the same stack we used here.
