Day 14 · Advanced Verification & Road Ahead

AI-Assisted Verification Workflows

Video 2 of 4 · ~11 minutes

Dr. Mike Borowczak · Electrical & Computer Engineering · CECS · UCF

Assertions · AI Verification · PPA · Road Ahead

🌍 Where This Lives

In Industry

As of 2025-2026, every major EDA vendor — Synopsys, Cadence, Siemens — ships LLM-assisted verification tooling (Synopsys Research.ai, Cadence Verisium AI, Siemens Questa Inspector). NVIDIA's ChipNeMo and Google's Verilog-LLM research have demonstrated “AI-first” verification flows. Industry is not asking whether AI helps verification but how to use it responsibly. Engineers who know the workflow outperform ones who don't — the productivity delta is real.

In This Course

You've used AI tactically all semester. Today's video gives you a strategy: when to ask, what to ask, how to verify the output, and — critically — what AI cannot do. The goal isn't AI-free verification; it's AI-informed verification with the right amount of human judgment at the right points.

⚠️ Two Failure Modes

❌ “AI will do it for me”

Accept generated testbenches without review. Skip the waveform check. Trust the assertion set without asking whether it covers the intent. Result: AI-generated code passes the AI-generated tests, and nothing meaningful is verified. Both sides are guesses.

❌ “I'll avoid AI entirely”

Refuse to use AI because “real engineers” don't need it. Spend 3 hours writing testbench boilerplate that AI produces in 30 seconds. Fall behind peers who ship in half the time. Result: same verification quality, 10× slower.

✓ Informed collaboration

You own the intent: what to verify, what the contract is, what the edge cases mean. AI handles the mechanics: SVA syntax, stimulus generation, boilerplate. You review outputs with the lens of “does this match my intent?” — not “is this syntactically correct?” (it usually is).

👁️ I Do — A Good Verification Prompt

Prompt:

“I have a UART TX module with ports: input logic clk, rst, i_valid, input logic [7:0] i_data, output logic o_busy, o_tx_line. Parameter: CLKS_PER_BIT. Protocol: 8N1, idle high, start bit low, LSB first, stop bit high, o_busy asserts during transmit.

Write a SystemVerilog self-checking testbench that: (1) sends 10 random bytes, (2) captures the tx_line waveform and decodes it byte-by-byte using a receiver model, (3) compares decoded bytes against the transmitted bytes, (4) reports pass/fail per byte.

Also write 5 concurrent assertions for protocol correctness. Use disable iff (rst).”

My thinking: Five things this prompt does right: (1) full port list, (2) parameter named, (3) protocol spec explicit, (4) acceptance criteria explicit (random bytes, decode, compare), (5) scope explicit (testbench + 5 assertions). Asking for “a testbench for my UART” without these would produce generic output. Specificity = quality.
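The receiver model requested in step (2) of the prompt can be sketched as a plain software reference decoder. This is a Python sketch under simplifying assumptions (one line sample per bit period, i.e. CLKS_PER_BIT already collapsed to 1); the function name and the sample-list format are illustrative, not part of the course materials:

```python
def decode_8n1(samples):
    """Decode an idle-high 8N1 serial stream into bytes.

    `samples` is a list of line values, one sample per bit period.
    Frame format per the prompt: start bit low, 8 data bits LSB first,
    stop bit high. Returns (decoded_bytes, errors).
    """
    decoded, errors = [], []
    i = 0
    while i < len(samples):
        if samples[i] == 1:        # idle high: wait for a start bit (0)
            i += 1
            continue
        frame = samples[i:i + 10]  # start + 8 data + stop
        if len(frame) < 10:
            errors.append(("truncated frame", i))
            break
        data_bits = frame[1:9]     # LSB first
        byte = sum(bit << n for n, bit in enumerate(data_bits))
        if frame[9] != 1:          # stop bit must be high
            errors.append(("framing error", i))
        decoded.append(byte)
        i += 10
    return decoded, errors

# Example: idle, then one frame carrying 0x55 (LSB-first bits 1,0,1,0,1,0,1,0)
stream = [1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
print(decode_8n1(stream))  # → ([85], [])
```

The same decode-then-compare logic is what the generated SystemVerilog testbench should implement; having a software reference like this makes reviewing the AI's receiver model much faster.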

🤝 We Do — The Verification Review Loop

| Step | Human | AI |
| --- | --- | --- |
| 1. Spec intent | Write protocol, invariants, edge cases | — |
| 2. Generate boilerplate | — | Testbench skeleton, asserts, stimulus |
| 3. Review generated code | Check each assertion against your intent | — |
| 4. Run, observe | Does output make sense? Waveforms match intent? | — |
| 5. Ask AI to explain surprises | “Why did X happen?” | Diagnose |
| 6. Fix based on explanation | You decide what to change | Implement the change |
| 7. Coverage audit | What's missing? | Suggest uncovered cases |

Together: AI handles steps 2, 5, 6-partial, 7. You handle 1, 3, 4, 6-decision. The intent-vs-mechanics split is the entire workflow.

🧪 You Do — Which AI Output Do You Trust?

For each of these AI outputs, decide: trust and integrate, review carefully then integrate, or reject.

A. “Here's a testbench that generates 100 random bytes for your adder.” → Trust + integrate. Mechanical stimulus generation, low stakes.

B. “Here's an assertion: valid |-> ##1 ready.” → Review carefully. Does your protocol really require ready in exactly 1 cycle? Or is it variable?

C. “Your design is correct; no bugs found.” → Reject. Correctness claims need evidence, not pronouncements.

D. “Here's a coverage report for your testbench.” → Reject unless tool-generated. AI can't run your simulator; it can only guess at coverage.

E. “Add these 3 edge cases: all-zeros, all-ones, alternating bits.” → Trust + integrate. AI is excellent at edge-case suggestion.
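Item B's review question (exactly one cycle, or a window?) can be made concrete with a small trace checker. A Python sketch, where the function name and the (valid, ready) trace format are assumptions for illustration:

```python
def check_handshake(trace, min_lat=1, max_lat=1):
    """Check that every asserted `valid` is answered by `ready`
    within [min_lat, max_lat] cycles.

    `trace` is a list of (valid, ready) pairs, one per clock.
    min_lat=max_lat=1 models the strict SVA `valid |-> ##1 ready`;
    widening the window models `valid |-> ##[1:N] ready`.
    Returns the cycle indices where the property failed. Attempts
    still pending when the trace ends are counted as failures.
    """
    failures = []
    for t, (valid, _ready) in enumerate(trace):
        if not valid:
            continue
        window = [r for _, r in trace[t + min_lat : t + max_lat + 1]]
        if not any(window):
            failures.append(t)
    return failures

trace = [(1, 0), (0, 1), (1, 0), (0, 0), (0, 1)]
print(check_handshake(trace))        # strict ##1: fails at cycle 2
print(check_handshake(trace, 1, 2))  # ##[1:2] window: passes
```

Running both variants against the same trace shows exactly why the assertion needs review: the strict form flags a failure that the windowed form accepts. Only your spec says which one is right.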

🔧 What AI Cannot Do

  • Run your simulator. It can't observe behavior. Any “I ran it and it works” is hallucination.
  • Know your specific tools. Xilinx Vivado 2024.2 vs iverilog 12.0 have different SVA support. Verify against your actual flow.
  • See your waveforms. It can reason about what should happen. It cannot tell you what did.
  • Guarantee completeness. “I wrote 10 assertions” does not mean 10 is enough. Coverage analysis requires tool instrumentation.
  • Understand your product intent. If your UART is specifically for a low-latency protocol, AI doesn't know that unless you tell it.
  • Catch its own mistakes. Always read the code. Always run it.

Rule of thumb: AI is excellent at the 80% of verification that's mechanical. The 20% that requires product judgment, observation, and discipline is still yours.

🤖 Check the Machine (Meta-Check)

You've been asking AI “check the machine” all semester. Today: check your checking. Ask AI to critique your verification approach for your UART.

TASK

“Here is my UART testbench. What's missing?”

BEFORE

Predict: reset, back-to-back frames, framing error, max baud, min baud — likely gaps.

AFTER

AI typically catches 2-4 real gaps. Accept the 1-2 that match your spec; the rest usually stem from misalignment with it.

TAKEAWAY

AI is great at gap analysis; it finds ~70% of real gaps and suggests ~30% irrelevant ones.

Key Takeaways

 You own intent. AI owns mechanics. That division is the whole workflow.

 Specific prompts get specific, usable output. Vague prompts get boilerplate.

 AI cannot run your simulator or see your waveforms. Correctness claims without evidence are hallucinations.

 Gap analysis — “what am I missing?” — is one of AI's highest-yield uses.

Let AI write the testbench. You decide what the testbench means.

🔗 Transfer

PPA Methodology

Video 3 of 4 · ~12 minutes

▸ WHY THIS MATTERS NEXT

Day 10 introduced PPA concepts. Day 14 Video 3 gives you the methodology — a structured process for measuring performance, power, and area on a real design, then using the numbers to make engineering decisions. The PPA discipline is what separates a working design from a shippable one.