How do we learn words from speech?

$1,710
Raised of $10,000 Goal
18%
Ended on 11/01/13
Campaign Ended

About This Project

Babies do it. Teenagers do it. Adults even do it (though maybe with a little more effort). Humans have the amazing capacity to learn and comprehend language: sometimes several! In order to begin to understand spoken language, listeners of any age must first segment the continuous and complex speech stream into individual words. Unlike written language, which cues word boundaries with white space, no such unambiguous or consistent cue to word boundaries exists in spoken language. ItwouldbeasthoughwehadtofindthewordslikethisallthetimeIt'swaymoredifficult!

Yet, current research cannot definitively answer the question “How do people segment words from continuous speech?” This research aims to understand how patterns of pitch and time in speech facilitate word segmentation by having students at our university listen to a made-up language and learn words.

Ask the Scientists

Join The Discussion

What is the context of this research?

How does the human brain respond to speech patterns? What does that response indicate about how people segment spoken language?

We have compelling behavioral evidence that pitch and temporal information influence how people learn words in a language. Yet we cannot know for sure what cues people use to do this without real-time measurements of how people's brains respond as they're listening to speech.

The next step of our research requires us to purchase equipment that records the electrical activity of the brain in order to answer our overarching research question. Neural activity while listening to speech will reveal what cues in the acoustic signal people use to segment words.

What is the significance of this project?

This research has implications for how babies learn their first language and how adults learn a 2nd (or 3rd, 4th, 5th...) language. When learning a language, listeners can make use of a rich world of cues by immersing themselves in that language environment. Our research provides scientific evidence for the intuition that, although daunting, moving to Spain will help you learn Spanish better than a textbook will.

Asking people to make judgments about speech, and seeing how pitch and time cues affect their responses, tells us whether pitch and time information is used during language learning and comprehension; our further research incorporating EEG recording will dig deeper, illuminating precisely how and when these cues are used.

Overall, this research is important for understanding the perceptual and neural mechanisms underlying the processing of pitch and time cues in language, and how these cues contribute to language learning and comprehension. Previous research has revealed a number of cues that facilitate speech segmentation and word learning (such as the probability of co-occurrence of syllables), but few studies have addressed the role of patterns of pitch and time in these processes.
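One such cue, the co-occurrence probability of syllables, can be sketched with a toy calculation. This is an illustrative sketch only (the syllable stream and "words" below are invented for the example, not our actual stimuli): it computes forward transitional probabilities between adjacent syllables, which tend to be high inside a word and dip at word boundaries.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """P(next syllable | current syllable) for each adjacent pair.

    High values suggest a within-word transition; dips suggest
    a likely word boundary (statistical-learning style).
    """
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {
        pair: count / first_counts[pair[0]]
        for pair, count in pair_counts.items()
    }

# Toy stream: the made-up words "bidaku" and "golatu"
# repeated in varying order with no pauses between them.
stream = ["bi", "da", "ku", "go", "la", "tu",
          "go", "la", "tu", "bi", "da", "ku",
          "bi", "da", "ku", "go", "la", "tu"]

tp = transitional_probabilities(stream)
# Within-word transitions are perfectly predictable...
print(tp[("bi", "da")])            # 1.0
# ...while transitions across a word boundary are not.
print(round(tp[("ku", "go")], 3))  # 0.667
```

A listener (or model) tracking these statistics could posit word boundaries wherever the transitional probability drops.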

In addition, learning about these underlying perceptual and neural mechanisms will potentially inform us about the pitch and temporal processing deficits linked to neurological disorders such as dyslexia, autism, and stuttering, and could shed light on how less efficient processing of these cues due to aging or degraded hearing may influence speech understanding abilities.

What are the goals of the project?

All funds from this campaign will be used to help purchase EEG equipment.

Electroencephalography, or EEG, is a fast, convenient, and non-invasive method of recording the electrical activity of the brain at the scalp. This technique is well-tailored to answering research questions that require information about when a neural process is occurring. Because we are particularly interested in how the brain responds to events that occur fairly quickly, such as the perception of a spoken word (which occurs over milliseconds), we want to “image” activity in the brain as it takes place. EEG enables us to see how the brain is responding to an event within milliseconds.
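The millisecond-scale picture EEG gives typically comes from averaging: the recording is cut into short epochs time-locked to each stimulus (for example, a word onset) and averaged, so activity unrelated to the event tends to cancel while the event-related response survives. Here is a minimal sketch of that idea with simulated numbers; the sampling rate, epoch length, and signal values are illustrative assumptions, not our actual recording parameters.

```python
import random

SAMPLING_RATE_HZ = 512  # samples per second (illustrative)
EPOCH_MS = 200          # window after each stimulus onset
SAMPLES_PER_EPOCH = SAMPLING_RATE_HZ * EPOCH_MS // 1000  # 102 samples

def average_erp(recording, onsets):
    """Average the epochs time-locked to each stimulus onset.

    Noise that is not time-locked to the events averages toward
    zero, leaving the event-related response visible.
    """
    epochs = [recording[t:t + SAMPLES_PER_EPOCH] for t in onsets]
    n = len(epochs)
    return [sum(epoch[i] for epoch in epochs) / n
            for i in range(SAMPLES_PER_EPOCH)]

# Simulated single-channel recording: Gaussian noise plus a small
# deflection 50 ms after each of 40 stimulus onsets.
random.seed(0)
onsets = [o * 500 for o in range(40)]  # roughly one stimulus per second
recording = [random.gauss(0, 1)
             for _ in range(onsets[-1] + SAMPLES_PER_EPOCH)]
peak = 50 * SAMPLING_RATE_HZ // 1000   # sample index at ~50 ms
for t in onsets:
    recording[t + peak] += 2.0         # the "event-related" bump

erp = average_erp(recording, onsets)
# The bump at ~50 ms survives averaging; the surrounding noise shrinks.
print(round(max(erp), 2), erp.index(max(erp)))
```

The same logic, scaled up to many channels and more sophisticated preprocessing, is what lets an EEG experiment say *when* the brain responded to a word.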

Here's a picture of our undergraduate research assistants learning to use some EEG equipment.

Funding this equipment is a collaborative effort. Our lab, along with a couple of other labs in the Cognition and Cognitive Neuroscience (CCN) area at Michigan State University (MSU), will pay out of pocket for whatever we cannot manage to raise by crowdfunding.

In addition to specifically helping our lab to achieve our own research goals, this equipment will be accessible to other members of our research community at MSU for research and training in a CCN-community EEG facility. This facility will bring researchers from different areas of psychology together, encouraging more collaboration and creating a melting pot of scientific perspectives, skills, and ideas, benefiting both researchers and the community in general.

For more information about how we graduate students (Elisa and Katherine) are funded, and the various sources of funding for different projects in the TAP lab, check out our website or feel free to email to ask!

Budget


Our initial goal amount of $10,000 will be put towards an EEG acquisition system. This system (i.e., the Biosemi Active2 Base) consists of several custom-made electronic components that convert the analog signal measured by sensors on the scalp into a digital code that can be read out and saved by a computer.
The analog-to-digital box (pictured here) is essential to EEG operation.
There are also incidental expenses involved in running a participant (gel that facilitates sensor function, syringes to insert the gel, special stickers to attach some sensors to participant’s skin…) that this money will help with.
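To give a feel for what the analog-to-digital box does, here is a toy sketch of quantization: mapping a continuous scalp voltage onto the integer code a converter would store. The range and bit depth below are illustrative assumptions for the example, not Biosemi specifications.

```python
def quantize(voltage_uv, full_scale_uv=500.0, bits=24):
    """Map an analog voltage to the integer code an ADC would store.

    A 24-bit converter splits the +/- full-scale range into 2**24
    steps; the saved code is the step nearest the measured voltage.
    (Range and bit depth here are illustrative, not Biosemi specs.)
    """
    levels = 2 ** bits
    step_uv = 2 * full_scale_uv / levels  # microvolts per step
    code = round(voltage_uv / step_uv)
    # Clamp to the representable range of a signed integer code.
    return max(-levels // 2, min(levels // 2 - 1, code))

# A 10-microvolt scalp potential becomes one of ~16.7 million codes:
print(quantize(10.0))  # 167772
```

The computer then reads out and saves a stream of such codes, one per sensor per sampling tick.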
First reach goal: $15,000
The additional $5,000 in our reach goal will allow us to put a little more towards the basic EEG acquisition system, as well as enable us to fund the sensors required to actually run the experiment! Without this, we're still borrowing some equipment from our collaborators.
The sensor set consists of a cable with 32 pin-type sensors, which measure neural activity; flat external sensors, which measure eye movements and facial muscle activity; and two special sensors that allow the system to operate at a higher impedance than other EEG systems.
Second reach goal: $20,000
This will fund even more of the basic EEG acquisition system and enable us to purchase a special cap, which is necessary in order to collect data.
The cap (pictured here) is custom-made for use with the EEG system: it is a snug elastic cap with plastic ports that hold the sensors in place over specific locations on the head.
BEYOND! $$$
If we get more than $20,000, we'll be very happy! Anything beyond $20k will first go towards the acquisition system (money that doesn't have to be paid out of pocket by lab principal investigators is money available for other research, paying participants, or paying hard-working graduate students, amongst other worthy causes).
If we have any more left over, we will purchase additional cap sizes or sensors. The sensors in the first reach goal provide information about neural activity at 32 scalp sites. If we have enough money, we can also purchase a 64-channel sensor cable; some kinds of analysis can only be done with the additional information those extra sensors provide. Also, this is sensitive electronic equipment that can break and need to be sent back to the manufacturer for repairs. Having a second set of sensors to fall back on would mean that research can always move forward!
The single cap in the second reach goal is a size medium, which fits most people, but not all. Having caps in additional sizes will help us include more participants in our research and gather better data.

Endorsed by

Great project that is likely to yield valuable new insights about speech perception. The new equipment will be a valuable resource to many in the Cognitive Science Program at Michigan State University.
This is an exciting effort from terrific graduate students. The equipment will not only support research in cognitive neuroscience at MSU, but will be critical to the training at many levels.

Meet the Team

Elisa Kim Fromboluti
M.A.

Affiliates

Timing, Attention, and Perception Laboratory, Cognition and Cognitive Neuroscience Area, Department of Psychology, Michigan State University Cognitive Science Program, Michigan State University
Katherine Jones
Graduate Student

Affiliates

Michigan State University, Department of Psychology

Team Bio

Elisa and Katherine are both graduate students in the Timing, Attention and Perception (TAP) lab at Michigan State University.

Here in the TAP lab, we study how the brain uses and responds to timing and rhythm; for example, how are timing and rhythm involved in attention, motor coordination, and the perception of speech and music?

Research into basic questions of how most people perceive or tap along to a beat, or how duration, pitch, and intensity cues in music or speech influence perception helps shed light on neurological disorders involving deficits in music and speech processing, in addition to furthering understanding of how these types of cues affect how (and when) we pay attention to the world around us.


Katherine Jones

Hello! I am currently pursuing a PhD in Psychology at Michigan State University, and working in Dr. Devin McAuley's Timing Attention and Perception (TAP) Lab. I am broadly interested in auditory perception, and I am pursuing research on the relationship between music and language perception. In particular, I am interested in how pitch and rhythm in speech facilitate language comprehension and word learning; furthermore I aim to contribute to the understanding of the underlying neural mechanisms involved in language learning and real-time language processing.

Additional Information

Relevant data from the behavior-only version of this project has not been published yet. Check out our lab notes for posters we've presented at conferences, updates on the manuscript, and updates on our CCN-community EEG facility.

If you want really up-to-date information, or you're interested in other fun research on rhythm, language, and music perception by our lab, like us on Facebook, follow us on Twitter, or visit our website.

Facebook: https://www.facebook.com/pages/TAP-Lab-Michigan-State-University/134714169959003?fref=ts

Twitter: @MSUTAPLAB

Email: MSU.TAP.LAB@gmail.com

Website: http://psychology.msu.edu/TAPlab/


Project Backers

  • 28 Backers
  • 18% Funded
  • $1,710 Total Donations
  • $61.07 Average Donation