Show HN: ZTGI-AC – An AI that checks its internal stability before answering

ztgiai.pages.dev

2 points by capter 2 days ago

Hi everyone,

This is a small experimental AI project I’ve been building called *ZTGI-AC*.

Most LLMs generate an answer immediately, but ZTGI-AC does something different: before responding, it runs an internal stability check.

It evaluates:

• risk
• jitter
• dissonance
• SAFE/WARN/BREAK modes
• INT/EXT gating (a self-monitoring loop)

Only after the internal signals stabilize does it generate a reply.
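To make that concrete, here's a rough Python sketch of the kind of loop I mean. Everything below (the names, the thresholds, the random stand-in signals) is illustrative only, not the production code:

    import random

    # Illustrative thresholds only -- not the values the demo uses.
    WARN_THRESHOLD = 0.4
    BREAK_THRESHOLD = 0.8
    MAX_CHECKS = 5

    def internal_signals() -> dict:
        """Placeholder self-monitoring probes. A real implementation
        might use, e.g., token-level entropy for jitter, or the
        disagreement between sampled drafts for dissonance."""
        return {
            "risk": random.random(),
            "jitter": random.random(),
            "dissonance": random.random(),
        }

    def mode(signals: dict) -> str:
        """Map the worst internal signal to SAFE / WARN / BREAK."""
        worst = max(signals.values())
        if worst >= BREAK_THRESHOLD:
            return "BREAK"
        if worst >= WARN_THRESHOLD:
            return "WARN"
        return "SAFE"

    def stable_reply(prompt: str) -> str:
        """Generate only once the signals stabilize (SAFE); refuse on
        BREAK; re-check a bounded number of times on WARN."""
        for _ in range(MAX_CHECKS):
            m = mode(internal_signals())
            if m == "BREAK":
                return "[BREAK] declining to answer in an unstable state"
            if m == "SAFE":
                return f"(answer to: {prompt})"  # normal generation here
            # WARN: loop and re-sample the signals before committing
        return "[WARN] signals never stabilized; answering cautiously"

The key design choice is that generation is gated rather than always-on: the model has to earn a SAFE reading before it commits to an answer.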

This project explores whether self-evaluation loops can reduce chaotic or unstable outputs in LLM-like systems.

*Demo:* https://ztgiai.pages.dev (Non-commercial, early prototype.)

I’d love feedback from the HN community, especially around:

• whether self-monitoring loops are meaningful,
• potential improvements to stability metrics,
• and how this idea compares to classical alignment approaches.

Thanks for taking a look!

capter 2 days ago

Hi everyone, OP here — thanks for checking out ZTGI-AC!

Happy to answer any questions or discuss the stability loop design. This is an early prototype and I'm exploring:

• whether internal self-monitoring can reduce unstable LLM behaviour
• alternative stability metrics (beyond risk/jitter)
• how gating (INT/EXT) affects output quality under noisy inputs
• ideas for tests or failure modes worth trying
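On that last point, one cheap failure-mode probe would be to perturb a prompt with character noise and treat answer disagreement as an empirical jitter metric. A sketch only: query_model here stands in for whatever backend you're testing.

    import random
    import string

    def perturb(prompt: str, rate: float = 0.05) -> str:
        """Inject character-level noise into a prompt."""
        return "".join(
            random.choice(string.ascii_lowercase) if random.random() < rate else c
            for c in prompt
        )

    def empirical_jitter(query_model, prompt: str, n: int = 10) -> float:
        """Fraction of noisy re-asks whose answer differs from the
        clean-prompt answer: 0.0 = stable, 1.0 = different every time."""
        baseline = query_model(prompt)
        flips = sum(query_model(perturb(prompt)) != baseline for _ in range(n))
        return flips / n

    # Usage with any callable backend:
    #   print(empirical_jitter(my_model, "What is 2 + 2?"))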

All feedback, criticism, and ideas are welcome!

doppelgunner a day ago

[flagged]

  • capter a day ago

    Thanks for the suggestion! I wasn’t aware of NextGen Tools — I’ll definitely take a look.

    Right now I’m mainly collecting early feedback on the stability loop (risk/jitter/dissonance) and on whether SAFE/WARN/BREAK modes make sense under real inputs.

    If the tool proves useful, launching it there could be a great next step. Happy to hear your thoughts.