🧠 Neuroscience: Dream Research

Will We Record Dreams? The Complete Guide to Neuroscience's Revolutionary Dream Decoding Technology

📅 March 15, 2026 ⏱️ 7 min read

Every night, your brain creates 4-6 movies — complete with images, sounds, emotions, even pain. By morning, you remember only fragments. What if we could record them? The idea sounds like science fiction, but a team of scientists in Kyoto achieved something once unthinkable: in 2013, they decoded the images inside human dreams using fMRI and machine learning. The era of neural dream decoding has begun — and the implications for understanding consciousness, mental health, and human privacy will be profound.


The REM Discovery: Where It All Began

In 1953, graduate student Eugene Aserinsky and professor Nathaniel Kleitman at the University of Chicago discovered REM (rapid eye movement) sleep — a phase in which the eyes dart beneath closed lids while the brain shows activity nearly identical to wakefulness. The study, published in Science, marked the birth of modern sleep research. Aserinsky first noticed the phenomenon while watching his son Armond's eyes during sleep — a chance observation that became a science.

REM makes up 20-25% of adult sleep (roughly 90-120 minutes per night), but newborns spend about 50% of their sleep in REM — suggesting a critical role in neural development. Wake someone during REM and they report a dream about 80% of the time; woken from other stages, only 5-10% do. During REM, heart rate increases, breathing becomes irregular, and the skeletal muscles are paralyzed (muscle atonia) — preventing you from acting out dream movements.

REM behavior disorder (RBD), in which this paralysis fails, causes violent movements during sleep and serves as an early marker of Parkinson's disease — often preceding motor symptoms by 10-15 years, which makes sleep studies a valuable diagnostic tool.

[Figure: Brain activity during REM sleep, showing neural patterns associated with dreaming]

What the Brain Sees When It Dreams

During REM, the visual cortex, amygdala (emotion), and hippocampus (memory) activate — while the prefrontal cortex (logic, judgment) remains nearly dormant. This explains why dreams are visually vivid but logically absurd — you accept flying or talking to the dead without question. Noradrenaline release all but ceases during REM — one reason dreams escape critical scrutiny. PGO (ponto-geniculo-occipital) waves create random stimuli that the cortex tries to “explain” — generating narratives from neural noise. Acetylcholine, the neurotransmitter dominating REM, amplifies visual signals while serotonin is suppressed — creating the unique chemical environment of dreams. Cortisol gradually increases throughout the night, peaking in the early morning hours — which is why the most vivid and bizarre dreams occur just before waking, in the final REM cycle.

The Horikawa Study: Decoding Dreams

In 2013, Tomoyasu Horikawa and his team at the ATR Computational Neuroscience Laboratories (Kyoto) published a groundbreaking study in Science: they decoded the visual content of dream imagery as subjects slept. Three volunteers slept repeatedly inside fMRI scanners across some 200 experimental sessions — woken each time they entered hypnagogia (the transitional state between wakefulness and sleep) and asked to report what they saw. A machine learning algorithm, trained on thousands of images from the ImageNet database, then analyzed brain activity patterns in the primary and secondary visual cortex. The result: about 60% accuracy in predicting what the dreamer saw — significantly above the 50% chance level. If someone dreamed of a car, the algorithm recognized the “vehicle” category — not color or brand details, but general semantic categories. The study used multivariate pattern analysis (MVPA) — exploiting the fact that BOLD patterns in the visual cortex during dreams resemble those evoked by seeing the same objects while awake. The brain essentially “sees” dreams the same way it sees reality.
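The logic of MVPA can be caricatured with a toy decoder: learn an average activation pattern ("centroid") per semantic category from wakeful viewing, then assign a new pattern to the nearest centroid. Everything below is invented for illustration — simulated "voxel" vectors, made-up category prototypes, a nearest-centroid rule — not the far more sophisticated decoders the actual study trained on fMRI data.

```python
# Toy sketch of multivariate pattern analysis (MVPA): nearest-centroid
# classification over simulated "voxel" activity patterns.
# All data here is synthetic; this is an illustration, not the real pipeline.
import random

random.seed(0)
N_VOXELS = 50

# Two semantic categories, each with a characteristic activation prototype.
prototypes = {
    "vehicle": [random.gauss(0, 1) for _ in range(N_VOXELS)],
    "person":  [random.gauss(0, 1) for _ in range(N_VOXELS)],
}

def make_pattern(prototype, noise=0.3):
    """Simulate one noisy activation pattern around a category prototype."""
    return [v + random.gauss(0, noise) for v in prototype]

def centroid(patterns):
    """Average a set of patterns voxel-by-voxel."""
    return [sum(col) / len(col) for col in zip(*patterns)]

# "Training": average several wakeful-viewing patterns per category.
centroids = {
    label: centroid([make_pattern(p) for _ in range(20)])
    for label, p in prototypes.items()
}

def classify(pattern):
    """Assign a pattern to the category with the nearest centroid."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sqdist(pattern, centroids[label]))

# "Test": decode a new (dream-like) pattern drawn from the vehicle prototype.
print(classify(make_pattern(prototypes["vehicle"])))  # → vehicle
```

The key idea this preserves from the study: the decoder is trained on brain responses to real images, then applied unchanged to sleep-onset activity — category-level decoding works because the two pattern families overlap.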

From Dreams to “Video” Thoughts

After the Horikawa study, the technology evolved rapidly. In 2023, researchers at the University of Texas (Alexander Huth's team) used fMRI and GPT-based language models to decode continuous internal speech and narratives into text — without any need for implants. Accuracy reaches 72-82% at the semantic level. Meanwhile, Meta AI developed a system that decodes MEG (magnetoencephalography) signals into text with 73% accuracy in real time. Elon Musk's Neuralink aims for implantable neural interfaces that record thoughts directly, via electrodes in the cortex.

[Figure: fMRI scanner of the kind used for dream decoding research]

Why We Dream: The Theories

The memory consolidation theory suggests dreams reorganize information — transferring memories from hippocampus to neocortex. Studies in rodents show that maze neural patterns replay during sleep — mice literally “re-run” the maze in dreams (Wilson & McNaughton, Science, 1994). The threat simulation theory (Revonsuo, 2000) views dreams as an evolutionary tool for practicing dangers — explaining why 60% of dreams contain negative emotions. The overfitted brain theory (Hoel, 2021) proposes that illogical elements function as “training noise” — preventing the brain from overfitting to recent events, just like neural networks use dropout and data augmentation. Lucid dreams — where the dreamer knows they're dreaming — reveal that consciousness can function within sleep. 55% of people have experienced at least one, while 23% experience them regularly. Techniques like reality testing (checking if you're dreaming during the day) and wake-back-to-bed (wake after 5 hours, stay awake 30 minutes, sleep again) increase lucid dream probability to 46%.

Decoding Technology Tools

The main tools: fMRI (high spatial resolution, slow temporal — measures the BOLD hemodynamic response), EEG (low spatial, fast temporal — measures electrical activity at the scalp), MEG (magnetic fields, non-invasive), and ECoG (electrodes on the cortical surface — high resolution, but requires surgery). In a multi-site study (Konkoly et al., Current Biology, 2021), teams including Ken Paller's lab at Northwestern University and Martin Dresler's group trained lucid dreamers to move their eyes in coded patterns during REM — creating bidirectional communication with sleeping subjects. Dreamers correctly answered math questions while asleep — question: “8 minus 6?”, answer: two left-right eye movements. Four independent labs demonstrated the effect, confirming the phenomenon's validity. “Interactive dream research” opens doors to therapeutic interventions within dreams — particularly for PTSD nightmares.
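The eye-movement code is simple enough to sketch as a decoder: count complete left-right sweep pairs in an electrooculogram-like trace, where each pair encodes one unit of the answer. The signal values, threshold, and function name below are invented for illustration; real studies detect sweeps in noisy EOG channels with more care.

```python
# Hedged sketch of decoding a lucid dreamer's eye-movement answer.
# A leftward sweep is a sample below -threshold, a rightward sweep above
# +threshold; one L followed by one R counts as a single coded unit.
def decode_answer(eog_trace, threshold=0.5):
    """Count complete left-right sweep pairs in a 1-D signal."""
    count = 0
    awaiting_right = False
    for sample in eog_trace:
        if sample < -threshold and not awaiting_right:
            awaiting_right = True          # saw the leftward sweep
        elif sample > threshold and awaiting_right:
            count += 1                     # completed an L-R pair
            awaiting_right = False
    return count

# "8 minus 6?" answered with two left-right sweeps:
trace = [0.0, -0.9, 0.1, 0.8, 0.0, -0.7, 0.2, 0.9, 0.1]
print(decode_answer(trace))  # → 2
```

The same counting scheme works in both directions: experimenters present the question as sounds or flashes, and the sleeping subject's answer arrives as a small integer read off the EOG channel.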

Ethical Issues and Neural Privacy

Mental privacy isn't explicitly protected in most legislation — courts have never had reason to rule on it. Chile (2021) became the first country to establish “neuro-rights” by amending its constitution. Who will have access to your dreams? Could employers, insurers, or courts demand “dream reports”? The implications are enormous: dreams don't reflect intentions — dreaming of a crime doesn't indicate criminal intent. UNESCO published neurotechnology guidelines (2023), emphasizing that mental freedom must remain inviolable. Spain and Brazil are studying similar legislative frameworks, and the EU is considering neuro-rights within the AI Act. Philosophers warn: dreams are the last refuge of absolute privacy — a space without criticism, censorship, or social pressure.

The Future: When We'll Watch Our Dreams

The convergence of fMRI, AI, and neural interfaces will bring low-resolution dream video within the coming decades. Current systems decode semantic object categories — not details. Future applications: severe PTSD therapy (controlled re-exposure to traumatic dreams), creative inspiration (precise recording of sleep ideas — Kekulé discovered benzene structure in a dream, Paul McCartney composed Yesterday), machine learning training with neural data. The Gallant Lab (UC Berkeley) aims to reconstruct “dream video” from fMRI — using advanced diffusion models similar to Stable Diffusion, trained on brain data. Greece, through the Institute of Neuroscience (NCSR Demokritos), participates in European brain imaging projects. Dreams, for the first time in human history, won't be lost to morning light — and this raises questions that Aserinsky couldn't even imagine in 1953.

Sources:

  • Horikawa, T., Tamaki, M., Miyawaki, Y. & Kamitani, Y. (2013). “Neural Decoding of Visual Imagery During Sleep.” Science, 340(6132), 639-642.
  • Aserinsky, E. & Kleitman, N. (1953). “Regularly Occurring Periods of Eye Motility, and Concomitant Phenomena, During Sleep.” Science, 118(3062), 273-274.