Going Deep into the Brain: Understanding Brain Circuits - MIT Research
Researchers are trying to understand humanity's most mysterious object - the mind.
How does the human brain interpret other people’s thoughts, make moral judgments, and comprehend belief systems? These questions have traditionally been the realm of philosophers and poets, but researchers at MIT are seeking — and finding — answers through scientific inquiry. Here we take a brief sail through recent findings and discoveries at MIT...
The mind is one of the most mind-blowing facts about the entire universe: take a bunch of cells, wire them up in a network, make them fire in patterns, and you get the human mind. The brain is built of many circuits — the signal-transmitting networks, formed by tens of billions of neurons and their quadrillions of connections, or synapses, that carry out our brain function.
Rebecca Saxe, a professor in the Department of Brain and Cognitive Sciences, was the first to discover that theory of mind has a specific command center in the brain — the right temporoparietal junction, or TPJ. She used functional magnetic resonance imaging (fMRI), a scanning technique that reveals brain activity by measuring blood flow. Her demonstration that such a high-level cognitive process could occur in a distinct patch of cortex, rather than involve a constellation of regions or circuits, astonished many neuroscientists.
Saxe also explores social cognition’s role in how we make moral judgments. In another recent study, participants contemplated a scenario in which one person accidentally poisons another. Saxe found that activity in the right TPJ, where theory of mind resides, corresponds with a person’s willingness to forgive someone for causing unintentional harm. “In general, the amount of activity in this particular brain region during moral judgment predicts which moral judgment you make,” she says.
Architecture of the Brain
Nancy Kanwisher, a professor at MIT’s McGovern Institute for Brain Research, is enlivening a scientific debate about the architecture of the mind that has gone on for 200 years: Is the brain a general-purpose organ adept at juggling myriad cognitive tasks, or is it modular, with distinct regions specializing in certain functions, such as face recognition or language? She has made a compelling case for the latter theory.
Over the past decade or so, Kanwisher has identified small but distinct regions in the brain dedicated to specific tasks. She is best known for verifying the existence of the fusiform face area (FFA), a region that homes in on and recognizes faces. The finding helped explain why people with damage to this area of the brain cannot recognize faces, even those of people they know well.
“A number of functions, including some that we suspected and some that we never would have guessed, are really stunningly specialized,” says Kanwisher. She and her colleagues were amazed to discover, for example, another module — the parahippocampal place area (PPA) — dedicated to processing places. They also identified a region close to but distinct from the FFA that specializes in recognizing bodies.
Drawing Brain Circuits
Programming Brain Neurons with Light and Genes
Professor Edward Boyden, who first came to MIT as a teen prodigy and is now a neuroengineer, has developed a first-of-its-kind control system for the brain. Combining genetic and optical engineering, the technology is so precise that it can switch electrical activity on and off in individual neurons, or brain cells. It represents a potentially transformative leap, both in understanding brain function and in treating disorders involving faulty circuitry: Parkinson’s disease, depression, epilepsy, and others. And in the near term, it may serve as a clever tool for reversing certain types of blindness.
Boyden’s technology involves two unusual light-activated proteins produced by microorganisms. One of those proteins, found in green algae, creates a positive charge inside a cell when exposed to blue light. The other, from a bacterium that grows in extremely salty water, creates a negative charge when exposed to yellow light. Boyden, who is also a principal investigator in the McGovern Institute for Brain Research, delivers the genes that encode these proteins to target neurons via harmless viruses. He then exposes those neurons to blue or yellow light in millisecond pulses approximating the speed at which neurons naturally interact. The result? A reliable way to activate and silence specific neural circuits.
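To get a feel for what switching neurons on and off with millisecond light pulses means, here is a toy simulation of my own (a sketch only, not code from Boyden's lab, with every name and constant invented for illustration). It models a single simplified, leaky integrate-and-fire neuron in which a blue-light pulse injects a depolarizing current that drives spiking, and a yellow-light pulse injects a hyperpolarizing current that silences it.

```python
# Toy leaky integrate-and-fire neuron with two invented "light-gated" currents:
# a blue pulse injects a depolarizing (positive) current, a yellow pulse a
# hyperpolarizing (negative) one. Constants are illustrative, not physiological.
DT = 0.1                          # ms, simulation time step
V_REST, V_THRESH = -70.0, -50.0   # mV, resting potential and spike threshold
TAU = 10.0                        # ms, membrane time constant
I_BLUE, I_YELLOW = 30.0, -30.0    # arbitrary current units

def simulate(light_schedule, t_total=100.0):
    """light_schedule: list of (start_ms, end_ms, 'blue' | 'yellow') pulses."""
    v = V_REST
    spike_times = []
    for step in range(int(t_total / DT)):
        t = step * DT
        # Sum whatever light-driven current is active at time t.
        current = sum(
            I_BLUE if color == "blue" else I_YELLOW
            for start, end, color in light_schedule
            if start <= t < end
        )
        # Leaky integration: decay toward rest plus the injected current.
        v += DT / TAU * (-(v - V_REST) + current)
        if v >= V_THRESH:          # threshold crossed: record a spike and reset
            spike_times.append(round(t, 1))
            v = V_REST
    return spike_times

# Millisecond-scale pulses: blue drives spiking, yellow silences the cell.
print(simulate([(10, 40, "blue"), (40, 80, "yellow"), (80, 95, "blue")]))
```

Running it prints spike times that cluster inside the blue pulses and vanish during the yellow one, which is the essence of the on/off control the real optogenetic tools provide, minus all of the actual biology.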
This optogenetic system, as it is called, could one day be a dramatically fine-tuned improvement on the implanted electrodes used in a therapy called deep-brain stimulation (DBS). DBS controls the tremors caused by advanced Parkinson’s disease and is being tested in some extreme cases of depression.
And, in a revelation of the tool’s versatility, Boyden recently showed in mice that it can reverse blindness caused by non-functioning photoreceptors — cells in the retina that process light. He implanted his proteins in retinal neurons, which connect to photoreceptors but don’t process light — thereby rendering them light-sensitive. In essence, he converted the neurons to photoreceptors and thus enabled the mice to see.
Years of testing will be required before optogenetics can be used in humans, Boyden notes. Yet researchers in the Media Lab’s new Neuroengineering and Neuromedia Group, which Boyden spearheads, are working quickly. They recently showed that the genes encoding the light-activated proteins can function safely in mammals, without triggering an immune response. They are also developing optical-fiber arrays that can beam pulses of light at specific groups of neurons deep within the brain.
Machine Learning
Leslie Pack Kaelbling is a faculty leader of the recently launched MIT Intelligence Initiative, which brings cognitive scientists and computer scientists together to explore the nature of intelligence and create machines with more human-like intelligence. The project could propel Kaelbling toward her dream of developing a robot capable of learning and decision-making in many situations.
“Because humans can learn and plan, I firmly believe that I can make a machine do that, too,” she says. Kaelbling’s work focuses on a branch of artificial intelligence (AI) known as machine learning. Her research on computer systems that adapt to a complex, changing environment has contributed to innovations in mobile robotics, as well as programs that help airplanes avoid collisions, provide intelligent assistance on the computer desktop, and help drivers improve fuel economy.
Yet Kaelbling and her colleagues have hit a metaphorical brick wall when trying to design machines that can perform certain tasks that come naturally to humans but are deceptively complex. For example, she is working on a “seeing” robotic arm that learns to grasp a bottle, first by determining its three-dimensional structure from webcam video, then by repeatedly sensing the bottle’s physical properties through touch. “Vision remains incredibly difficult,” Kaelbling says in describing the challenges of training machines to accomplish something that a baby can do instinctively.
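As a purely illustrative aside (my own toy sketch, in no way Kaelbling's actual system), here is one minimal way to picture learning to grasp through touch: the gripper starts from a rough width estimate, the sort of guess a webcam-based 3-D reconstruction might provide, and then nudges that estimate using simulated tactile feedback. All numbers, names, and the feedback model are invented.

```python
import random

# Hypothetical sketch: a robot gripper refines its estimate of a bottle's width
# by repeatedly closing on it and reacting to simulated touch outcomes.
TRUE_WIDTH = 6.2        # cm, unknown to the learner
NOISE = 0.3             # cm, sensor noise on each touch attempt

def touch_feedback(grip_width):
    """Simulated tactile outcome of one grasp attempt."""
    felt = TRUE_WIDTH + random.uniform(-NOISE, NOISE)
    if grip_width > felt + 0.2:
        return "slipped"      # gripper too wide, bottle not held firmly
    if grip_width < felt - 0.2:
        return "crushed"      # gripper too narrow, squeezing too hard
    return "held"

def learn_grasp(initial_guess=10.0, step=0.5, attempts=30):
    """Adjust grip width from repeated touch attempts (toy trial-and-error update)."""
    width = initial_guess     # e.g. a rough guess from webcam 3-D structure
    for _ in range(attempts):
        outcome = touch_feedback(width)
        if outcome == "slipped":
            width -= step     # close the gripper a bit more next time
        elif outcome == "crushed":
            width += step     # open it a bit
        else:
            step *= 0.5       # near the target: refine in smaller steps
    return width

print(f"learned grip width: {learn_grasp():.2f} cm (true width {TRUE_WIDTH} cm)")
```

Real robotic grasping is vastly harder, of course; the point of the sketch is only the loop of act, sense, and adjust that the paragraph above describes.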
She envisions “a sidekick that would watch you do things, infer what your strategy is, and assist you accordingly” in myriad tasks, such as tidying your living room or anticipating conflicts in your calendar six months out. Such a machine must combine many intelligent capabilities — vision, dexterity, mobility, maneuverability, language comprehension — in addition to the ability to learn, adapt to new situations, and make decisions. Wow, I am eagerly waiting for such a sidekick...
(Source: MIT Spectrum Magazine. More on mind and brain will soon follow. Till then, preserve your most valuable asset, the brain, and feed it with some great stuff...)