This blog reports new ideas and work on mind, brain, behavior, psychology, and politics - as well as random curious stuff.

Wednesday, February 01, 2017

Are human-specific plastic cortical synaptic connections what makes us human?

I want to pass on an excellent primer (open access) on the plasticity, observed only in human brains, of a specific synapse between pyramidal neurons and fast-spiking interneurons of the neocortex:

One outstanding difference between Homo sapiens and other mammals is the ability to perform highly complex cognitive tasks and behaviors, such as language, abstract thinking, and cultural diversity. How is this accomplished? According to one prominent theory, cognitive complexity is proportional to the repetition of specific computational modules over a greatly expanded surface of the cerebral cortex (neocortex). However, the human neocortex has also been shown to possess unique features at the cellular and synaptic levels, raising the possibility that expanding the computational module is not the only mechanism underlying complex thinking. In a study published in PLOS Biology, Szegedi and colleagues analyzed a specific cortical circuit in live postoperative human tissue, showing that human-specific, very powerful excitatory connections between principal pyramidal neurons and inhibitory neurons are highly plastic. This suggests that the exclusive plasticity of specific microcircuits might be counted among the mechanisms endowing the human neocortex with the ability to perform highly complex cognitive tasks.
Posted by Deric Bownds at 3:00 AM
Blog Categories: animal behavior, brain plasticity
No one thing makes us human. What makes us human is everything about us taken together.
I read an interview with Geoff Hinton recently where he said that some AI problems will require new types of neurons. These researchers are able to thrash their virtual networks to find out what they can do, and they have come to the conclusion that it's not just size:
"One problem we still haven’t solved is getting neural nets to generalize well from small amounts of data, and I suspect that this may require radical changes in the types of neuron we use."