Flight Lab didn't start as an app. It started as a research question: what should AI actually do in education, and what should it refuse to do?
Answering that produced a vision document, a set of principles, and a long list of decisions — many of them choices not to do something. This page surfaces the parts of that work most relevant to Flight Lab itself.
Three sections: Vision · Principles · Decisions.
Grounded in learning science, the vision argues for a narrow set of high-leverage opportunities and strict guardrails against well-documented risks. The headline idea: AI's greatest unforced error is giving users the exact answer they ask for. Real learning requires desirable difficulty — retrieval, self-explanation, spacing, productive struggle. AI that removes friction doesn't accelerate learning; it creates the illusion of learning while cognitive debt accumulates.
The full vision is organized into seven themes. Flight Lab leans hardest on three of them:
The product preserves friction by design. The kid folds before the app explains. No summaries, no answers handed back, no bypass.
AI as an adaptive tutor that reduces extraneous load but never removes the core cognitive work. Socratic prompts, not lectures.
Non-personified. No mascot, no streaks. The subject matter is the protagonist. Technology disappears into the experience.
"True innovation will come from AI that brilliantly curates difficulty, not AI that eliminates it." — Friction-by-design philosophy
These principles surfaced while designing the Flight Lab core loop. Each one is paired with the theme from the vision it most naturally extends.
The AI speaks only when there's something specific to say. It does not narrate, fill silence, or chat. Quiet-by-default is itself a pro-human choice — silence is where the learner thinks.
Extends Theme 7 (Pro-Human Interface).
Assume capability. A 4-year-old can fold a paper airplane without a tutorial; the app should not pre-scaffold tasks the kid can figure out by doing. No saccharine encouragement, no kid-ified baby-talk, no patronizing tone.
Extends Theme 7 (Pro-Human Interface).
The wow comes from physics becoming visible — an engineering-drawing callout on a kid's own plane, a force arrow landing on a wing — not from confetti, badges, stickers, or animated mascots.
Extends Theme 7 (Pro-Human Interface).
Teaching order runs experience → principle, not principle → example. Kids observe an asymmetric plane spinning, then learn 'matching wings,' then in a later session learn lift as the deeper why.
Extends Theme 1 (Desirable Difficulties) and Theme 2 (Active Scaffolding).
AI changes the economics of concrete examples. Pre-AI, the textbook gives one generic example and hopes it lands. With AI, every learner can anchor the same abstract concept to their specific curiosity graph — dragons, soccer, trains, ballet.
Extends Theme 2 (Adaptive Tutoring).
The kid's first draft IS the diagnostic. Review and grounding happen only after the first attempt and its visible outcome. This enforces retrieval, makes mistakes essential, and prevents premature help by design.
Extends Theme 1 (Desirable Difficulties).
The learning experience is about the subject matter — paper airplanes, fractions, Shakespeare — not about AI. A kid finishes a session and remembers what they learned about flight, not that the app talked to them. The model is plumbing; the subject is the protagonist.
Extends Theme 7. Contrasts with the dominant market framing of AI-centered edtech.
Ordered roughly by how load-bearing the decision is for the product. Many of these are choices not to do something.
Lift as a concept is necessary to explain asymmetric flight, but we don't lead with it. Session 1 grounds the observation (the plane spins); the concept of lift is planted for a later session. Order matters more than coverage.
Early designs wanted two issues in one demo — asymmetry plus airflow disruption. Cognitive load wins. One concept, one artifact, one fold. Spacing happens across sessions.
The app is about paper airplanes. AI is plumbing. No 'AI-powered' badges, no mascot, no model-name callouts inside the experience. The kid remembers flight, not the technology.
Kids aged 4–8 don't want to read. Short voice prompts, paired with monochrome pictogram loops that show motion. The phone is a tool, not the experience.
One calm instructional voice. Not a character. Not a friend. Non-personification protects against emotional attachment to algorithms and keeps the subject matter on stage.
The app isn't trying to replace a caregiver. It assumes an adult is nearby. This reframes safety, handles friction points, and cuts a whole class of features (over-engineered error recovery, social loops, etc.).
An early mockup used a schematized diagram of the plane. The pedagogically stronger choice is annotating the kid's actual photo — their object, their hand-fold, their outcome. Schematization is reserved for later sessions once the concept is anchored concretely.
No points, badges, streaks, collectibles, or progress meters. Intrinsic motivation comes from the plane flying or not flying. If the kid wants to go again, that's the signal.