Train Your Viral Baseline
Upload viral videos with their IG/TikTok links and stats. The brain learns what viral looks like for each category.
The Science Behind the Scan
01
You feed the brain real viral videos.
Every video gets processed by TRIBE v2 — a brain prediction model built by Meta FAIR. Trained on 1,000+ hours of real fMRI brain scans from 700+ people.
Scientifically: TRIBE v2 uses a multimodal transformer (V-JEPA2 + LLaMA 3.2 + Wav2Vec-BERT) to predict the BOLD fMRI signal across 20,484 cortical vertices — one prediction per second.

02
The brain maps which regions light up.
Reward centers fire when something feels good, the amygdala when emotions hit, the visual cortex when something catches your eye. Every viral video creates a unique activation pattern.
Scientifically: the model predicts the hemodynamic response (BOLD signal) at each cortical vertex. We extract activation from 6 regions: nucleus accumbens, amygdala, occipital cortex, anterior cingulate, TPJ, and hippocampus.
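A minimal sketch of this step, assuming the model's output is a (seconds × vertices) array and that an atlas lookup supplies the vertex indices for each region (the indices and array values below are placeholders, not real atlas data):

```python
import numpy as np

# Hypothetical output shape: one prediction per second across
# 20,484 cortical vertices -> (T, 20484) array of predicted BOLD.
T, N_VERTICES = 120, 20484
rng = np.random.default_rng(0)
bold = rng.standard_normal((T, N_VERTICES))

# Assumed region -> vertex-index mapping. Real indices would come
# from a cortical atlas; these ranges are illustrative only.
regions = {
    "nucleus_accumbens": np.arange(0, 50),
    "amygdala": np.arange(50, 120),
    "occipital_cortex": np.arange(120, 2000),
    "anterior_cingulate": np.arange(2000, 2400),
    "tpj": np.arange(2400, 2900),
    "hippocampus": np.arange(2900, 3100),
}

def region_activation_vector(bold, regions):
    """Mean absolute predicted BOLD per region, averaged over time.

    Collapses the (T, 20484) prediction into one 6-dim vector,
    one entry per region of interest.
    """
    return np.array([np.abs(bold[:, idx]).mean() for idx in regions.values()])

vec = region_activation_vector(bold, regions)
print(vec.shape)  # (6,)
```

Collapsing to one number per region is one reasonable choice; a real pipeline might keep the time dimension or use peak rather than mean activation.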
10M views · 50K comments
03
You tell it the real numbers.
Views and comments from Instagram or TikTok tell the system how viral each video was. 10M views = a stronger signal. The brain learns what millions of views look like.
Scientifically: we build a weighted baseline per category by averaging region activation vectors across all training videos, weighted by view count: the mean activation pattern of proven viral content.
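The weighted average above can be sketched in a few lines. All numbers here are made up for illustration; the only assumption is that each training video contributes one region activation vector plus a view count used as its weight:

```python
import numpy as np

# Hypothetical training set: one 6-dim region activation vector per
# video (values invented for the example).
activation_vectors = np.array([
    [0.8, 0.6, 0.9, 0.5, 0.4, 0.7],  # video A
    [0.7, 0.5, 0.8, 0.6, 0.3, 0.6],  # video B
    [0.9, 0.7, 0.9, 0.4, 0.5, 0.8],  # video C
])
views = np.array([10e6, 2e6, 15e6])  # real stats would come from IG/TikTok

# View-weighted mean: higher-performing videos pull the baseline harder.
weights = views / views.sum()
baseline = weights @ activation_vectors  # 6-dim category baseline
print(baseline.round(3))
```

Normalizing the weights so they sum to 1 keeps the baseline on the same scale as the individual activation vectors regardless of how many videos are in the category.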
04
More training = smarter brain.
5 videos = rough baseline. 20 = a pattern. 100 and the brain knows exactly what viral looks like. It compares your video against the proven pattern.
Scientifically: with more samples, the baseline estimate stabilizes. The viral score is the cosine similarity between your video's activation vector and the category baseline, scaled to 0-100.
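The scoring step can be sketched as follows. The vectors are made up, and the exact 0-100 mapping is an assumption (here cosine similarity in [-1, 1] is linearly rescaled; clamping negatives to zero would be another reasonable choice):

```python
import numpy as np

def viral_score(video_vec, baseline_vec):
    """Cosine similarity between activation vectors, scaled to 0-100."""
    cos = np.dot(video_vec, baseline_vec) / (
        np.linalg.norm(video_vec) * np.linalg.norm(baseline_vec)
    )
    # Cosine lies in [-1, 1]; map it linearly onto [0, 100].
    return (cos + 1) / 2 * 100

baseline = np.array([0.85, 0.62, 0.88, 0.48, 0.43, 0.73])  # made-up category baseline
video = np.array([0.80, 0.60, 0.90, 0.50, 0.40, 0.70])     # made-up new video

score = viral_score(video, baseline)
print(round(score, 1))
```

Because activation values are non-negative, scores in practice land in the upper half of the range; a video whose pattern closely tracks the baseline scores near 100.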
Powered by Meta FAIR's TRIBE v2 brain encoding model · View the research
