Video 2 of 4 · ~11 minutes
Dr. Mike Borowczak · Electrical & Computer Engineering · CECS · UCF
As of 2025-2026, every major EDA vendor — Synopsys, Cadence, Siemens — ships LLM-assisted verification tooling (Synopsys Research.ai, Cadence Verisium AI, Siemens Questa Inspector). NVIDIA's ChipNeMo and Google's Verilog-LLM research have demonstrated “AI-first” verification flows. Industry is not asking whether AI helps verification but how to use it responsibly. Engineers who know the workflow outperform ones who don't — the productivity delta is real.
You've used AI tactically all semester. Today's video gives you a strategy: when to ask, what to ask, how to verify the output, and — critically — what AI cannot do. The goal isn't AI-free verification; it's AI-informed verification with the right amount of human judgment at the right points.
Failure mode 1, blind trust: accept generated testbenches without review, skip the waveform check, trust the assertion set without asking whether it covers the intent. Result: AI-generated code passes the AI-generated tests, and nothing meaningful is verified. Both sides are guesses.
Failure mode 2, blind refusal: avoid AI because "real engineers" don't need it, spend 3 hours writing testbench boilerplate that AI produces in 30 seconds, and fall behind peers who ship in half the time. Result: the same verification quality, delivered far more slowly.
You own the intent: what to verify, what the contract is, what the edge cases mean. AI handles the mechanics: SVA syntax, stimulus generation, boilerplate. You review outputs with the lens of “does this match my intent?” — not “is this syntactically correct?” (it usually is).
Prompt:
“I have a UART TX module with ports: input logic clk, rst, i_valid, input logic [7:0] i_data, output logic o_busy, o_tx_line. Parameter: CLKS_PER_BIT. Protocol: 8N1, idle high, start bit low, LSB first, stop bit high, o_busy asserts during transmit.
Write a SystemVerilog self-checking testbench that: (1) sends 10 random bytes, (2) captures the tx_line waveform and decodes it byte-by-byte using a receiver model, (3) compares decoded bytes against the transmitted bytes, (4) reports pass/fail per byte.
Also write 5 concurrent assertions for protocol correctness. Use disable iff (rst).”
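To calibrate your review, here is the kind of assertion set a prompt like this tends to produce; a minimal sketch, assuming the port and parameter names from the prompt above, with checks and timing that are illustrative and must be verified against your own spec:

```systemverilog
// Hedged sketch of an assertion module for the UART TX prompt above.
// Port/parameter names come from the prompt; the specific properties
// are assumptions to review against your intent, not a given.
module uart_tx_sva #(parameter int CLKS_PER_BIT = 16) (
  input logic clk, rst, i_valid, o_busy, o_tx_line
);
  // 8N1 idles high: the line must be high whenever we are not transmitting.
  a_idle_high: assert property (@(posedge clk) disable iff (rst)
    !o_busy |-> o_tx_line);

  // The line only falls (start bit) while the transmitter is busy.
  a_start_in_frame: assert property (@(posedge clk) disable iff (rst)
    $fell(o_tx_line) |-> o_busy);

  // An accepted request raises o_busy on the next cycle.
  a_busy_follows_valid: assert property (@(posedge clk) disable iff (rst)
    (i_valid && !o_busy) |-> ##1 o_busy);
endmodule
```

Attach it with `bind uart_tx uart_tx_sva #(.CLKS_PER_BIT(CLKS_PER_BIT)) u_sva (.*);` and then review each property against the protocol, exactly as step 3 of the workflow below requires.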
| Step | Human | AI |
|---|---|---|
| 1. Spec intent | Write protocol, invariants, edge cases | — |
| 2. Generate boilerplate | — | Testbench skeleton, asserts, stimulus |
| 3. Review generated code | Check each assertion against your intent | — |
| 4. Run, observe | Does output make sense? Waveforms match intent? | — |
| 5. Ask AI to explain surprises | “Why did X happen?” | Diagnose |
| 6. Fix based on explanation | You decide what to change | Implement the change |
| 7. Coverage audit | What's missing? | Suggest uncovered cases |
For each of these AI outputs, decide: trust and integrate, review carefully then integrate, or reject.
A. “Here's a testbench that generates 100 random bytes for your adder.” → Trust + integrate. Mechanical stimulus gen, low stakes.
B. “Here's an assertion: valid |-> ##1 ready.” → Review carefully. Does your protocol really require ready in exactly 1 cycle? Or is it variable?
C. “Your design is correct; no bugs found.” → Reject. Correctness claims need evidence, not pronouncements.
D. “Here's a coverage report for your testbench.” → Reject unless tool-generated. AI can't run your simulator; it can only guess at coverage.
E. “Add these 3 edge cases: all-zeros, all-ones, alternating bits.” → Trust + integrate. AI is excellent at edge-case suggestion.
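Item B is the archetypal "review carefully" case. When the protocol allows variable latency, the fix is usually a bounded delay range rather than a fixed delay; a hedged sketch, where the signal names and the 4-cycle bound are placeholders for your actual protocol:

```systemverilog
// Inside the DUT or a bound checker module.
// The AI's version: ready must arrive exactly one cycle after valid.
a_fixed_latency: assert property (@(posedge clk) disable iff (rst)
  valid |-> ##1 ready);

// What most handshake protocols actually need: ready within a bounded
// window. The [1:4] range here is illustrative; take the bound from spec.
a_bounded_latency: assert property (@(posedge clk) disable iff (rst)
  valid |-> ##[1:4] ready);
```

Accepting the fixed form when the spec allows variable latency gives you an assertion that fires on correct designs; accepting the ranged form with a bound pulled from thin air gives you one that stays silent on broken ones. Either way, the bound is intent, and intent is yours.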
You've been asking AI to "check the machine" all semester. Today: check your checking. Ask AI to critique your verification approach for your UART.
TASK
“Here is my UART testbench. What's missing?”
BEFORE
Before you ask, predict the likely gaps: reset behavior, back-to-back frames, framing errors, max baud, min baud.
AFTER
Expect AI to surface 2-4 real gaps. Accept 1-2; reject the rest as misaligned with your spec.
TAKEAWAY
AI is great at gap analysis; it finds ~70% of real gaps and suggests ~30% irrelevant ones.
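When the exercise surfaces the back-to-back-frames gap, a short directed test closes it. A sketch, assuming the UART TX port names from the earlier prompt and a testbench where `clk`, `i_data`, `i_valid`, and `o_busy` are visible:

```systemverilog
// Directed test for the back-to-back-frames gap: request the second byte
// the moment o_busy deasserts, so no idle cycles separate the stop bit of
// frame one from the start bit of frame two.
task automatic send_back_to_back(input byte a, input byte b);
  @(posedge clk iff !o_busy);     // wait until the transmitter is free
  i_data  <= a;
  i_valid <= 1'b1;
  @(posedge clk) i_valid <= 1'b0;
  @(negedge o_busy);              // first frame just finished
  i_data  <= b;                   // queue the second byte immediately
  i_valid <= 1'b1;
  @(posedge clk) i_valid <= 1'b0;
endtask
```

The self-checking receiver model from the earlier prompt should decode both bytes; if it only sees the first, the TX is dropping requests that arrive in the cycle busy falls, which is exactly the bug random single-byte stimulus never provokes.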
① You own intent. AI owns mechanics. That division is the whole workflow.
② Specific prompts get specific, usable output. Vague prompts get boilerplate.
③ AI cannot run your simulator or see your waveforms. Correctness claims without evidence are hallucinations.
④ Gap analysis — “what am I missing?” — is one of AI's highest-yield uses.
🔗 Transfer
Video 3 of 4 · ~12 minutes
▸ WHY THIS MATTERS NEXT
Day 10 introduced PPA concepts. Day 14 Video 3 gives you the methodology — a structured process for measuring performance, power, and area on a real design, then using the numbers to make engineering decisions. The PPA discipline is what separates a working design from a shippable one.