TwinMind

Ex-Google X Veterans Secure $5.7M to Build Your AI Second Brain

Imagine carrying a second brain in your pocket — not a sci-fi implant, but an AI that quietly learns your world by listening in the background with your permission. That’s the idea behind TwinMind, a new “ambient AI memory” app created by three former Google X scientists. The startup has raised $5.7 million in seed funding and launched on Android alongside an updated iPhone app and a powerful new speech model.

Co-founded in March 2024 by Daniel George (CEO) with colleagues Sunny Tang and Mahi Karim (both CTOs), TwinMind turns your spoken life — meetings, lectures, brainstorming sessions, and casual conversations — into structured memory. It builds a personal knowledge graph, then generates context-aware notes, to-dos, summaries, and answers on demand. The app runs on-device, processes audio in real time, and can capture ambient speech for roughly 16–17 hours without draining your battery, according to the team. You can opt to back up data for recovery if a device is lost, or keep everything strictly local. It also supports real-time translation in more than 100 languages.

What sets TwinMind apart from meeting note-takers like Otter, Granola, and Fireflies is that it records passively in the background all day (again, with user permission), not just during scheduled calls. To make that possible on iPhone, the team engineered a low-level service in native Swift rather than relying on cross-platform frameworks or cloud-heavy processing that struggle to run continuously on iOS.

The spark for TwinMind came when George was leading Applied AI at JPMorgan and drowning in back-to-back meetings. He hacked together a script that captured audio, transcribed it on an iPad, and fed the text into large language models. Over time, the system learned his projects and even produced usable code. Friends wanted the same superpower — just not on their locked-down work machines — which pushed him to build a privacy-first mobile app that could live on a personal phone.
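The workflow George describes — capture audio, transcribe it, and feed the text to a large language model along with everything heard so far — amounts to a simple accumulate-and-prompt loop. The sketch below is purely illustrative: `AmbientMemory`, `transcribe`, and `ask_llm` are hypothetical stand-ins for whichever speech model and LLM you would plug in, not TwinMind's actual code.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Callable

@dataclass
class AmbientMemory:
    """Toy sketch of an 'ambient memory' loop: transcript segments
    accumulate into a running context that is prepended to every query.
    All names here are illustrative, not TwinMind's real API."""
    transcribe: Callable[[bytes], str]   # speech-to-text backend (stub)
    ask_llm: Callable[[str], str]        # LLM backend (stub)
    segments: list[str] = field(default_factory=list)

    def ingest(self, audio_chunk: bytes) -> None:
        # Transcribe right away and keep only the text,
        # mirroring the privacy model of discarding raw audio.
        text = self.transcribe(audio_chunk)
        self.segments.append(f"[{datetime.now():%H:%M}] {text}")

    def query(self, question: str) -> str:
        # Build a prompt from the accumulated context, then ask the model.
        context = "\n".join(self.segments)
        return self.ask_llm(f"Context:\n{context}\n\nQuestion: {question}")
```

With real backends swapped in for the stubs, each day of ingested speech becomes searchable context, which is essentially the "second brain" behavior the article describes.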

TwinMind also offers a Chrome extension that adds context from your browser. Using vision AI, it can scan open tabs and interpret content from tools like email, Slack, and Notion. The team even used this workflow internally to rank more than 850 internship applicants by opening each LinkedIn profile and CV in tabs, then asking the extension to shortlist top candidates.

Current AI chatbots struggle to ingest hundreds of documents or unify signals across apps like Gmail and LinkedIn, and AI browsers can’t capture knowledge from your offline conversations. TwinMind’s pitch is that it fuses both worlds: it learns from what you say, what you read, and what you work on — continuously and contextually.

The product has quickly found an audience. TwinMind reports over 30,000 users, with around 15,000 active each month. Roughly 20–30% also use the Chrome extension. The United States is the largest market so far, but usage is growing in India, Brazil, the Philippines, Ethiopia, Kenya, and across Europe. About 50–60% of users are professionals, around 25% are students, and the rest use the app for personal goals — including writing life stories and memoirs.

Privacy is central to the design. TwinMind says it does not train its models on user data. Audio is not accessible after the fact; it’s deleted on the fly, while transcribed text is stored locally on your device. The app is built to work fully offline, using the cloud only when you opt in and it benefits you.

The founding team’s background helped them move fast. Before TwinMind, George worked on multiple early-stage projects at Google X, including an AI earbuds initiative. Earlier, he applied deep learning to gravitational-wave astrophysics with the Nobel Prize–winning LIGO group and completed a PhD in AI for astrophysics at age 24, later joining Stephen Wolfram’s research lab. That early connection came full circle when Wolfram wrote the first check for TwinMind — his first-ever investment in a startup. The $5.7 million seed round was led by Streamlined Ventures with participation from Sequoia Capital and others, valuing TwinMind at $60 million post-money.

Alongside the app, the company introduced TwinMind Ear-3, a next-gen speech model that supports more than 140 languages. The team reports a 5.26% word error rate and a 3.8% speaker diarization error rate, thanks to a fine-tuned blend of open-source models trained on curated, human-annotated data such as podcasts, videos, and films. That breadth of language coverage also improves handling of accents and regional dialects, the company says. Ear-3 runs in the cloud and will be available to developers and enterprises via API at $0.23 per hour. If your internet drops, the app automatically falls back to the fully offline Ear-2 model and switches back to Ear-3 when you’re online again.
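The fallback behavior described here — use the cloud model when online, drop to the on-device model when the connection fails, and resume the cloud model once connectivity returns — is a classic failover pattern. This is a minimal sketch under assumed interfaces: the `cloud` and `local` backends are injected stubs standing in for Ear-3 and Ear-2, not TwinMind's actual API.

```python
class FallbackTranscriber:
    """Illustrative failover: try the cloud model first; on a network
    error, transparently fall back to the local model. Each new call
    retries the cloud, so service resumes once connectivity returns."""

    def __init__(self, cloud, local):
        self.cloud = cloud   # hypothetical cloud "Ear-3"-style backend
        self.local = local   # hypothetical on-device "Ear-2"-style backend

    def transcribe(self, audio: bytes) -> str:
        try:
            return self.cloud(audio)
        except ConnectionError:
            # Network failed: degrade gracefully to the offline model.
            return self.local(audio)
```

Because the cloud backend is retried on every call rather than being latched off, the switch back to the higher-accuracy model happens automatically, matching the behavior the article describes.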

TwinMind now offers a Pro subscription at $15 per month with a context window of up to 2 million tokens and 24-hour email support. The free tier remains generous, with unlimited transcription and on-device speech recognition.

The team has grown to 11 and plans to hire designers to refine the user experience and build a business development function to take its API to market. Expect more investment in user acquisition as the company scales.

If you’ve wished for instant, accurate meeting notes, better memory across projects, or a private AI that understands the context of your life, TwinMind is aiming to be that always-on second brain — helping professionals, students, and everyday users stay organized without changing how they work or live.