v2.0
AI ANALYSIS ACTIVE

Vibe Code Forensics Engine

Know if it was written
by AI in 3 seconds.

Analyze websites, GitHub repos, documents, and pasted text using a local forensic pattern engine. No login. No data stored.

18+ Signal Checks
100% Browser-Side
0ms Server Latency
Always Free
Used by developers, students & hiring teams
Private by design
Runs fully in your browser
No tracking ever
Smart fetch: we try four different methods to grab the page text, which works on most open articles and blogs. Big platforms (Google, Twitter/X, Facebook, Instagram) block browser fetches entirely — for those, copy the text from the page manually and use the Paste Text tab.
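The fallback chain can be sketched as below. The engine's actual four methods aren't documented here, so the shape — an ordered list of strategies tried until one returns text — is an assumption:

```javascript
// Sketch of a fallback fetch chain. Each strategy is one retrieval
// method (e.g. direct fetch, a CORS proxy); the concrete strategies
// used by the real engine are not specified, so callers supply them.
async function fetchWithFallbacks(url, strategies) {
  for (const strategy of strategies) {
    try {
      const text = await strategy(url);
      if (text && text.trim().length > 0) return text;
    } catch (_) {
      // CORS block or network error: fall through to the next method
    }
  }
  throw new Error(`All fetch methods failed for ${url}`);
}
```

A direct `fetch(url).then(r => r.text())` would be the first strategy; the rest depend on which public CORS proxies the engine trusts.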

Uses the public GitHub API — no proxy, no CORS issues. Fetches the README plus up to 10 code files and analyzes them. Works on any public repo, no API key required.
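A minimal sketch of that flow. The `/readme` and `/contents` endpoints are real GitHub REST API routes (unauthenticated calls are rate-limited to 60 requests/hour); the extension filter and the root-only listing are assumptions about how this engine picks its 10 files:

```javascript
// File-extension filter for "code files" — the real engine's list is unknown.
const CODE_EXT = /\.(js|ts|py|go|rs|java|rb|c|cpp)$/;

// Fetch a public repo's README plus up to `limit` code files from the
// repo root via the public GitHub REST API (no token needed).
async function fetchRepoFiles(owner, repo, limit = 10) {
  const api = `https://api.github.com/repos/${owner}/${repo}`;
  // The readme endpoint returns the file base64-encoded in `content`
  const readme = await (await fetch(`${api}/readme`)).json();
  // Root directory listing; a fuller engine might walk subdirectories too
  const root = await (await fetch(`${api}/contents`)).json();
  const files = root
    .filter(f => f.type === 'file' && CODE_EXT.test(f.name))
    .slice(0, limit);
  const bodies = await Promise.all(
    files.map(async f => (await fetch(f.download_url)).text())
  );
  return { readme: atob(readme.content.replace(/\n/g, '')), files: bodies };
}
```

`fetch` and `atob` are globals in modern browsers and in Node 18+, so this runs unchanged in both.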

● EXAMPLE — Run a scan to see your result
72%

VIBE CODED

Strong AI pattern consistency detected across structural, linguistic, and entropy signals.

↑ Pick a tab above and run a scan to analyze your own content



How It Works

Three input methods. One heuristic engine. Zero data ever leaves your browser.

Text Pattern Analysis

Scans for repetitive sentence structures, AI filler phrases, passive voice saturation, and uniform paragraph lengths — hallmarks of LLM output.
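Two of those checks can be sketched in a few lines. The phrase list and the 0.2 uniformity threshold below are illustrative, not the engine's actual values:

```javascript
// Illustrative AI-filler patterns — the engine's real list is not published.
const FILLER = [
  /it'?s (important|worth) (to note|noting)/i,
  /in today'?s (fast-paced|digital) world/i,
  /delve into/i,
  /furthermore/i,
];

function linguisticSignals(text) {
  // Count how many distinct filler patterns appear anywhere in the text
  const fillerHits = FILLER.filter(re => re.test(text)).length;
  // Split into paragraphs and measure how uniform their lengths are
  const paras = text.split(/\n\s*\n/).map(p => p.length).filter(n => n > 0);
  const mean = paras.reduce((a, b) => a + b, 0) / paras.length;
  const variance = paras.reduce((a, b) => a + (b - mean) ** 2, 0) / paras.length;
  // A low coefficient of variation means suspiciously uniform paragraphs
  const uniformParas = paras.length >= 3 && Math.sqrt(variance) / mean < 0.2;
  return { fillerHits, uniformParas };
}
```

Passive-voice detection needs real part-of-speech tagging and is omitted here.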

Code Signal Detection

Flags perfectly uniform indentation, generic variable names, missing TODO comments, and excessive docstring coverage typical of AI-generated code.
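A sketch of three of those code checks — the generic-name list and the indentation heuristic are assumptions, not the engine's published rules:

```javascript
// Illustrative "generic variable name" patterns.
const GENERIC_NAMES = /\b(data|result|temp|item|value|obj|res)\d*\b/g;

function codeSignals(source) {
  const lines = source.split('\n');
  // Leading-whitespace width of every indented line
  const indents = lines
    .filter(l => /^\s+\S/.test(l))
    .map(l => l.match(/^\s*/)[0].length);
  // Every indent an exact multiple of the smallest step = suspiciously perfect
  const step = indents.length ? Math.min(...indents) : 0;
  const uniformIndent =
    indents.length > 0 && indents.every(n => step > 0 && n % step === 0);
  const genericNames = (source.match(GENERIC_NAMES) || []).length;
  // Human codebases almost always accumulate TODO/FIXME/HACK markers
  const hasTodo = /\b(TODO|FIXME|HACK)\b/.test(source);
  return { uniformIndent, genericNames, hasTodo };
}
```

Docstring-coverage measurement would need a per-language parser, so it is left out of this sketch.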

Weighted Scoring

Each signal is weighted by impact. The final score uses tanh normalization — 30 + 45 × tanh(raw/30) — which compresses extreme raw totals so no single cluster of signals can pin the score to either end of the scale.

Score Categories

Vibe Coded — > 72 — Strong AI patterns

Suspicious — 50 – 72 — Multiple AI signals

Mixed Signals — 35 – 50 — Inconclusive

Certified Human — < 35 — Human-authored
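Under the stated formula the score stays within 30 ± 45, and the bands above map onto it directly. A sketch (the boundary handling at exactly 50 and 35 is a guess, since the published ranges overlap at those points):

```javascript
// The published normalization: 30 + 45 * tanh(raw / 30).
function normalize(raw) {
  return Math.round(30 + 45 * Math.tanh(raw / 30));
}

// Map a normalized score onto the four published bands.
// Boundary ownership (50, 35) is assumed, not specified.
function band(score) {
  if (score > 72) return 'Vibe Coded';
  if (score >= 50) return 'Suspicious';
  if (score >= 35) return 'Mixed Signals';
  return 'Certified Human';
}
```

Note the asymmetry: a raw score of 0 lands at 30 ("Certified Human"), and only strongly positive raw totals can clear the 72 threshold, since the ceiling is 75.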

Methodology

Forensic Scoring Logic

Our model combines structural code analysis, linguistic pattern recognition, and statistical entropy measurement to separate signals commonly associated with AI generation from those typical of human authorship.

You're not detecting "AI". You're detecting patterns that strongly correlate with AI-assisted generation. That distinction makes it credible and defensible.

Pillar 01

Structure Analysis

AI code tends to be overly structured and symmetric. We measure formatting consistency, over-modularization, template repetition, and boilerplate density.

Perfect indentation +8 · Over-modular +6 · Template repetition +7 · Messy hacks −10 · TODO/FIXME −8

Pillar 02

Linguistic Patterns

AI writes comments and prose differently. We scan for textbook-style explanations, filler phrases, and the absence of personality, frustration, or humor.

Filler phrases +12 · Passive voice +9 · Hedge language +6 · Slang/informal −12 · Personal voice −10

Pillar 03

Engineering Behavior

Real development leaves fingerprints: incremental commits, weird dependency choices, hotfixes, renamed files. AI repos are too clean, too complete, too fast.

Generic variable names +11 · Docstrings on every function +9 · No debug leftovers +8 · Inconsistent naming −8

Pillar 04

Entropy & Randomness

Human code has chaos: naming entropy, line length variance, random spacing quirks, unusual comment structures. Statistically smooth output signals AI generation.

Uniform paragraph lengths +10 · Consistent sentence length +7 · Low vocabulary variety +9 · High entropy −10
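Shannon entropy over tokens and line-length variance are two standard ways to quantify that statistical smoothness; this sketch assumes those two measures stand in for the pillar's checks:

```javascript
// Shannon entropy (bits) over a token stream: low entropy means a small,
// repetitive vocabulary — one marker of machine-smooth output.
function shannonEntropy(tokens) {
  const counts = {};
  for (const t of tokens) counts[t] = (counts[t] || 0) + 1;
  const n = tokens.length;
  return -Object.values(counts).reduce(
    (h, c) => h + (c / n) * Math.log2(c / n), 0
  );
}

// Standard deviation of line lengths: human code and prose vary wildly,
// while generated text tends toward near-constant line widths.
function lineLengthStdDev(text) {
  const lens = text.split('\n').map(l => l.length);
  const mean = lens.reduce((a, b) => a + b, 0) / lens.length;
  return Math.sqrt(lens.reduce((a, b) => a + (b - mean) ** 2, 0) / lens.length);
}
```

A single repeated token gives entropy 0; a perfectly balanced two-token stream gives exactly 1 bit.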
🔥 Roast My Repo

Run a scan above, then get a savagely honest verdict on your code — the kind developers share instantly.

"Your repo feels 82% vibe coded. Bro, did you even touch this code?"