Drop a slide
Paste with Ctrl+V, drag a PNG/JPG screenshot, or click to pick. Everything is processed in your browser. Nothing leaves your machine.
Drop a slide screenshot and instantly see the audience attention map: Itti–Koch saliency, the F-pattern reading bias, neural face detection, and an optional brain-network cognitive analysis (TRIBE v2). Every signal is computed locally: no upload, no AI cost, no waiting.
Saliency contrast, F-pattern reading bias, and neural face detection are blended into a single attention map, with per-quadrant scores and a focus index.
The tool tells you whether the slide concentrates attention or scatters it, and where to move the eye to make the message land.
Three peer-reviewed perceptual models combined. No black-box AI: every pixel of the heatmap is explainable.
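To make "blended into a single attention map" concrete, here is a minimal sketch of the idea in TypeScript. The weights, the quadrant scoring, and the focus-index formula are illustrative assumptions, not the tool's actual values:

```typescript
// Minimal sketch of blending three per-pixel signals into one attention map.
// The weights below are assumed for illustration, not the tool's real ones.
type Map2D = { w: number; h: number; data: Float32Array };

function blend(saliency: Map2D, fBias: Map2D, faces: Map2D,
               weights = { saliency: 0.5, fBias: 0.2, faces: 0.3 }): Map2D {
  const { w, h } = saliency;
  const out = new Float32Array(w * h);
  for (let i = 0; i < out.length; i++) {
    out[i] = weights.saliency * saliency.data[i]
           + weights.fBias * fBias.data[i]
           + weights.faces * faces.data[i];
  }
  return { w, h, data: out };
}

// Quadrant scores: share of total attention in each quarter of the slide
// (order: top-left, top-right, bottom-left, bottom-right).
function quadrantScores(m: Map2D): number[] {
  const sums = [0, 0, 0, 0];
  let total = 0;
  for (let y = 0; y < m.h; y++)
    for (let x = 0; x < m.w; x++) {
      const v = m.data[y * m.w + x];
      const q = (y < m.h / 2 ? 0 : 2) + (x < m.w / 2 ? 0 : 1);
      sums[q] += v;
      total += v;
    }
  return sums.map(s => (total > 0 ? s / total : 0));
}

// Focus index: 1 when attention is concentrated in a single quadrant,
// 0 when spread evenly. Normalized deviation from the uniform split;
// the maximum deviation (all attention in one quadrant) is 1.5.
function focusIndex(m: Map2D): number {
  const q = quadrantScores(m);
  const dev = q.reduce((a, s) => a + Math.abs(s - 0.25), 0);
  return dev / 1.5;
}
```

A map with all its weight in one quadrant scores a focus index near 1; a perfectly flat map scores 0, matching the "concentrates vs. scatters" verdict described above.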
The classic computational model of pre-attentive visual attention: color, intensity and orientation contrasts at multiple scales, fused into a saliency map.
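One core ingredient of the Itti–Koch model is center-surround contrast: a pixel is salient when it differs from its local neighborhood. The sketch below shows that single step on an intensity channel; the full model repeats it across color and orientation channels at several scale pairs, then normalizes and sums the resulting maps:

```typescript
// Box blur as a cheap stand-in for the surround estimate. The real model
// uses Gaussian pyramids; this is a simplified illustration.
function boxBlur(src: Float32Array, w: number, h: number, r: number): Float32Array {
  const out = new Float32Array(w * h);
  for (let y = 0; y < h; y++)
    for (let x = 0; x < w; x++) {
      let sum = 0, n = 0;
      for (let dy = -r; dy <= r; dy++)
        for (let dx = -r; dx <= r; dx++) {
          const yy = y + dy, xx = x + dx;
          if (yy >= 0 && yy < h && xx >= 0 && xx < w) {
            sum += src[yy * w + xx];
            n++;
          }
        }
      out[y * w + x] = sum / n;
    }
  return out;
}

// |center - surround|: bright-on-dark and dark-on-bright pop out equally.
function centerSurround(intensity: Float32Array, w: number, h: number, r = 2): Float32Array {
  const surround = boxBlur(intensity, w, h, r);
  return intensity.map((c, i) => Math.abs(c - surround[i]));
}
```

A uniform region yields zero contrast everywhere, while an isolated bright pixel stands out, which is exactly the pre-attentive "pop-out" the model captures.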
Eye-tracking research on Western readers shows attention follows an F-shape on screens. We bias the saliency map accordingly to match real-world reading behavior.
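A reading bias like this can be applied as a per-pixel weight. The function below is a hypothetical curve, not the tool's exact one: attention falls off left-to-right and top-to-bottom, with a second horizontal band partway down, mimicking the two bars of the "F":

```typescript
// Hypothetical F-pattern weight for a pixel at (x, y) on a w-by-h slide.
// All constants are illustrative assumptions.
function fPatternWeight(x: number, y: number, w: number, h: number): number {
  const nx = x / w, ny = y / h;                    // normalized [0, 1)
  const vertical = 1 - 0.6 * ny;                   // top rows get read more
  const horizontal = 1 - 0.5 * nx;                 // left edge gets read more
  const secondBar = 0.3 * Math.exp(-((ny - 0.4) ** 2) / 0.01); // mid-page sweep
  return horizontal * vertical + secondBar;
}
```

Multiplying the saliency map by such a weight field shifts predicted attention toward where Western readers actually look first.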
An SSD MobileNet variant runs in-browser and detects faces in the slide. Faces are massive attention magnets, and the heatmap accounts for them explicitly.
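One simple way to fold detected faces into the heatmap is to add a soft bump centered on each face box. The boost shape below is an assumption for illustration; the bounding boxes would come from the in-browser detector (face-api.js-style SSD MobileNet output, for example), but here they are plain rectangles:

```typescript
// Hypothetical face boost: a Gaussian bump per detected face, sized to the
// face box. Not the tool's actual formula.
interface FaceBox { x: number; y: number; width: number; height: number }

function applyFaceBoost(heat: Float32Array, w: number, h: number,
                        faces: FaceBox[], strength = 1.0): Float32Array {
  const out = Float32Array.from(heat);
  for (const f of faces) {
    const cx = f.x + f.width / 2, cy = f.y + f.height / 2;
    const sigma = Math.max(f.width, f.height) / 2; // spread scales with face size
    for (let y = 0; y < h; y++)
      for (let x = 0; x < w; x++) {
        const d2 = (x - cx) ** 2 + (y - cy) ** 2;
        out[y * w + x] += strength * Math.exp(-d2 / (2 * sigma * sigma));
      }
  }
  return out;
}
```

The boost peaks at the face center and decays smoothly, so a face pulls predicted attention without creating a hard-edged rectangle in the heatmap.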