The private AI overlay for interview focus
Zero‑noise guidance for coding and system design interviews. It reads the text you already see and returns short, structured suggestions you can act on fast.
Designed to stay out of shared windows. You control the endpoint, key, and usage.
Problem Statement
Given a binary tree, return its level-order traversal.
My thoughts
- Use BFS with a queue
- Track levels by queue length
Pseudocode
queue = [root]
while queue:
    for i in range(len(queue)):
        node = queue.pop(0)
        push node.val
        push children
Solution (Python)
# BFS level order traversal
# Time: O(n), Space: O(n)
from collections import deque

def level_order(root):
    if not root:
        return []
    res, q = [], deque([root])
    while q:
        level = []
        for _ in range(len(q)):
            n = q.popleft()
            level.append(n.val)  # capture value
            if n.left: q.append(n.left)
            if n.right: q.append(n.right)
        res.append(level)
    return res
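To try the solution yourself, you need a node type; a minimal sketch, assuming the usual LeetCode-style node with `val`, `left`, and `right` (the `TreeNode` class here is an illustration, not part of the app):

```python
from collections import deque

class TreeNode:
    """Minimal binary-tree node (assumed shape: val/left/right)."""
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def level_order(root):
    """BFS level-order traversal, as in the solution above."""
    if not root:
        return []
    res, q = [], deque([root])
    while q:
        level = []
        for _ in range(len(q)):
            n = q.popleft()
            level.append(n.val)
            if n.left: q.append(n.left)
            if n.right: q.append(n.right)
        res.append(level)
    return res

# Tree:    3
#         / \
#        9  20
#           / \
#          15  7
tree = TreeNode(3, TreeNode(9), TreeNode(20, TreeNode(15), TreeNode(7)))
print(level_order(tree))  # [[3], [9, 20], [15, 7]]
```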
Answer like a pro
Clear, interview‑ready structure: Problem → Thoughts → Pseudocode → Solution → Complexity.
Language‑aware
Choose Auto, Python, JS/TS, Swift, Java, C++, C#, Go, Rust, SQL and more.
Private by default
OCR runs locally on your Mac. Only extracted text goes to your Azure OpenAI deployment.
Stays out of shares
Borderless overlay designed to avoid typical screen‑sharing capture. Verify in your setup.
How it works
Capture
Reads visible on‑screen text using the Screen Recording API and Vision OCR at your chosen interval.
Think
Sends OCR text to your Azure OpenAI deployment and returns concise, interview‑ready suggestions.
Show
Renders results as Markdown (including code) in an overlay kept out of typical screen shares.
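The three steps above can be sketched as a simple poll-and-render loop. This is an illustration in Python, not the shipped implementation; `capture_text`, `think`, and `show_overlay` are hypothetical stand-ins for the OCR capture, the Azure OpenAI chat-completions call, and the overlay renderer.

```python
import time

# Assumed system prompt enforcing the structured output format.
SYSTEM_PROMPT = (
    "You are an interview assistant. Answer with the sections: "
    "Problem, Thoughts, Pseudocode, Solution, Complexity."
)

def build_messages(ocr_text: str, language: str = "Auto") -> list:
    """Build a chat-completions message list from extracted screen text."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Preferred language: {language}\n\n{ocr_text}"},
    ]

def run_loop(capture_text, think, show_overlay, interval: float = 2.0):
    """Poll the screen at a fixed interval; skip unchanged text to save tokens."""
    last = None
    while True:
        text = capture_text()          # Capture: local OCR of visible text
        if text and text != last:
            reply = think(build_messages(text))  # Think: your Azure deployment
            show_overlay(reply)        # Show: Markdown in the overlay
            last = text
        time.sleep(interval)
```

Only the extracted text crosses the network, which matches the privacy model described below.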
Key Features
- Structured output: Problem → Thoughts → Pseudocode → Solution → Complexity
- Language preference: Auto, Python, JS/TS, Swift, Java, C++, C#, Go, Rust, SQL…
- Live OCR via Apple Vision; configurable interval
- Integrated Preferences: Connection, Capture, Appearance, Regions, Hotkeys
- Global hotkeys (work even when the app is unfocused)
- Movable overlay + Follow Cursor (adjustable X/Y offsets)
- Save and reuse capture regions
Requirements
- macOS 13+
- Azure OpenAI (endpoint, deployment, API key)
Privacy
OCR runs locally. Only extracted text goes to your Azure OpenAI endpoint. Verify screen‑share exclusions with your tools and policies.
FAQ
Does it record or send my screen?
No. OCR runs locally via Apple Vision. Only extracted text is sent to your configured Azure OpenAI endpoint.
Will the overlay appear in screen shares?
Designed to avoid typical screen‑sharing captures, but always verify with your tools and policies.
What models are supported?
Any Azure OpenAI chat‑completions deployment (e.g., GPT‑4o family). You control your endpoint and key.
Request access
ScreenSage is in limited release. No public downloads yet.