About This Project
Our voice can carry signs of our health long before symptoms appear. I will build a soft-robotic larynx that mimics the human voice. A high-speed camera, sensors, and motorized stages will control and measure the model, which will reproduce healthy and diseased phonation. The data will clarify how tissue motion creates sound and support early-warning biomarkers, e.g., for Parkinson’s. I hypothesize that the soft-robotic larynx will predict vocal qualities and early pathology that are not measurable in humans.
Ask the Scientists
What is the context of this research?
Human phonation emerges from lung airflow that drives vocal fold vibrations (80–400 Hz), which we perceive as voice. Clinical recordings suggest that changes in phonation can precede diagnoses such as Parkinson’s, but the causative biomechanics are difficult to test in vivo: in vivo studies face ethical limits, ex vivo tissues degrade, and computational models lack validation. I propose a synthetic soft-robotic larynx setup with tunable tissue and programmable motion, enabling repeatable experiments across sex-specific geometries and disease-like asymmetries. We will independently control subglottal pressure, vocal fold tension, airflow, and glottal shape, and quantify outcomes via high-speed imaging and pressure/acoustic sensing. Crucially, by parameterizing vocal mechanisms, measuring spatio-temporal vibration patterns, and controlling laryngeal dynamics, I will test the prediction that distinct kinematic signatures reliably forecast specific vocal qualities and early pathology markers.
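To give a feel for the flow-induced self-oscillation described above, here is a toy van der Pol-style oscillator: its negative-damping term loosely stands in for the airflow energy that sustains vocal fold vibration against stiffness and losses. This is a generic textbook sketch with arbitrary parameters, not the project's actual lumped-mass model.

```python
# Toy self-sustained oscillator (van der Pol form). While |x| < 1 the
# damping term feeds energy in (standing in for airflow driving); beyond
# that it dissipates energy, so the vibration settles on a stable limit cycle.
MU = 0.5    # strength of the airflow-like energy input (hypothetical value)
DT = 1e-3   # integration time step
STEPS = 100_000

def deriv(x, v):
    # x'' = MU * (1 - x^2) * x' - x
    return v, MU * (1.0 - x * x) * v - x

def simulate(x=0.1, v=0.0):
    """Integrate with classical RK4 and return the trajectory of x."""
    xs = []
    for _ in range(STEPS):
        k1x, k1v = deriv(x, v)
        k2x, k2v = deriv(x + 0.5 * DT * k1x, v + 0.5 * DT * k1v)
        k3x, k3v = deriv(x + 0.5 * DT * k2x, v + 0.5 * DT * k2v)
        k4x, k4v = deriv(x + DT * k3x, v + DT * k3v)
        x += DT * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        v += DT * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        xs.append(x)
    return xs

# A small initial displacement grows into a sustained oscillation of
# amplitude ~2, independent of the starting condition.
amplitude = max(abs(x) for x in simulate()[-20_000:])
```

The point of the sketch is qualitative: a steady energy source plus a nonlinear saturation yields self-sustained vibration, which is the basic mechanism the robotic larynx reproduces with real airflow and silicone folds.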
What is the significance of this project?
Voice disorders affect millions and limit communication, yet we lack measurements that tie specific tissue motions to the sounds of our voice. An automated soft-robotic larynx makes this link testable: we can impose controlled asymmetries, tremor, stiffness loss, or reduced drive and observe the resulting vocal fold motion. These results will refine theory, guide clinical voice exams, and power data-driven early-warning biomarkers, e.g., detecting Parkinson's-related hypophonia or tremor before neurological signs. Clinically, the platform enables in vitro rehearsal of diagnosis and therapy: we can prototype surgical strategies, optimize augmentation or implant parameters, and create patient-specific settings. It can also support pre- and post-surgery planning and outcome tracking for gender-affirming voice surgery by exploring pitch targets while preserving vocal health. Beyond translation, the dataset and open hardware/control code will accelerate replication and fundamental understanding of phonation mechanics.
What are the goals of the project?
The first goal is to assemble the measurement setup, including an NI cDAQ with modules, Standa motorized stages with a controller, a Chronos high-speed camera with a lens, and calibrated microphones and pressure sensors.
The second goal is to develop control and acquisition software for synchronized control of motors, airflow, and recordings.
The third goal is to use the larynx to generate and systematically vary phonation. We will map stable regimes by tuning glottal shape, tissue stiffness and drive, then induce pathological dynamics and record synchronized high-speed video, pressure and audio.
The fourth goal is to validate and quantify the model’s behavior by computing glottal and acoustic metrics and comparing them with existing clinical data. This will confirm that the synthetic larynx reproduces expected laryngeal dynamics and vocal fold oscillation patterns, supporting early-biomarker discovery.
The fifth goal is to draft a manuscript and submit it to a peer-reviewed journal.
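As a minimal illustration of the glottal and acoustic metrics mentioned in the fourth goal, the sketch below estimates fundamental frequency and open quotient from a synthetic glottal-area waveform. The sample rate, the half-wave-rectified-sine waveform, and the thresholds are assumptions for this demo, not the project's actual analysis pipeline.

```python
import math

FS = 8000          # assumed sample rate in Hz (illustrative only)
F0_TRUE = 120.0    # synthetic "phonation" frequency for this demo

# Synthetic glottal-area waveform: a half-wave-rectified sine stands in for
# glottal opening/closing (area is zero while the folds are in contact).
area = [max(0.0, math.sin(2 * math.pi * F0_TRUE * n / FS)) for n in range(4000)]

def estimate_f0(x, fs, fmin=80, fmax=400):
    """F0 from the autocorrelation peak, searched over the 80-400 Hz band."""
    best_lag, best_r = None, -1.0
    for lag in range(int(fs / fmax), int(fs / fmin) + 1):
        r = sum(x[n] * x[n + lag] for n in range(len(x) - lag))
        if r > best_r:
            best_r, best_lag = r, lag
    return fs / best_lag

def open_quotient(x, eps=1e-9):
    """Fraction of time the glottis is open (area above a tiny threshold)."""
    return sum(1 for v in x if v > eps) / len(x)

f0 = estimate_f0(area, FS)   # recovers ~120 Hz
oq = open_quotient(area)     # ~0.5 for a half-rectified sine
```

The real experiments will compute such metrics from high-speed video segmentations and microphone signals rather than synthetic waveforms, but the quantities themselves (F0, open quotient, and related perturbation measures) are standard in voice science.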
Budget
My work relies on three integrated subsystems: (1) motion and actuation, (2) imaging and sensing, and (3) synchronized data acquisition. Multi- and single-axis controllers coordinate rapid, repeatable gestures across seven linear and two rotary stages, reproducing laryngeal dynamics (adduction/abduction, elongation, arytenoid cartilage rotation). A high-speed camera with suitable optics captures glottal dynamics; microphones and pressure transducers quantify acoustic output and subglottal pressures. A cDAQ chassis with modules supplies a common, hardware-timed clock and triggers. Each line item buys an experimental capability: deterministic multi-degree-of-freedom (DoF) control, quantitative imaging, and causal timing. I have already paid out-of-pocket for most of the project’s core pieces: casting molds for the larynx, the materials, the muscle-actuation hardware and control software, and the motorized stages and cDAQ components. The soft-robotic larynx is ready to be measured.
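The value of the common, hardware-timed clock can be sketched in a few lines: once the camera and the DAQ channels start on the same hardware trigger, aligning a video frame with its pressure/audio samples reduces to a rate ratio. The rates below are illustrative assumptions, not the project's actual settings, and per-device clock drift is deliberately ignored.

```python
# Minimal sketch of camera/DAQ alignment under a shared start trigger.
CAMERA_FPS = 1000      # assumed high-speed camera frame rate
DAQ_RATE = 50_000      # assumed sample rate of pressure/audio channels

def frame_to_sample(frame_index: int) -> int:
    """DAQ sample index captured at the same instant as a given video frame.

    Both streams share t = 0 at the hardware trigger, so the mapping is a
    pure rate ratio.
    """
    return round(frame_index * DAQ_RATE / CAMERA_FPS)

def sample_to_time(sample_index: int) -> float:
    """Seconds since the shared trigger for a given DAQ sample."""
    return sample_index / DAQ_RATE
```

Without the shared trigger and clock, each device would time-stamp against its own free-running oscillator, and this simple mapping would accumulate error over a recording; that is what the cDAQ's hardware timing buys.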
Endorsed by
Project Timeline
In December 2025, I will assemble the rig and test the setup to ensure everything is synchronized properly. From January to February 2026, I will finalize the control and measurement software. From March to May, I will perform “healthy phonation” measurements and validate metrics. From June to August, I will simulate “pathologies” and collect data. From September to October, I will analyze the data, map results to clinical metrics, and release the open dataset and code. In November 2026, I will submit the paper.
Nov 05, 2025
Project Launched
Dec 10, 2025
Order components & assemble rig
Feb 10, 2026
Setting up control & DAQ software
Mar 19, 2026
Run experiments
Sep 22, 2026
Data Analysis
Meet the Team
Bogac Tur
I am a mechanical engineer and voice scientist building soft-robotic models to reveal how the human voice works. I earned my PhD in 2025 with a dissertation on synthetic larynx design, multiphase flow, and acoustics. I am a postdoctoral researcher, most recently at University Hospital Erlangen (Germany), and currently between positions. My next step is to join Harvard Medical School (SimonyanLab) to pursue computational neuroscience with a focus on voice, aiming to connect neural control with phonation mechanics.
My research combines programmable, soft-robotic larynx models with high-speed imaging, airflow/pressure sensing, and closed-loop control. These in vitro platforms let me impose precise changes—stiffness asymmetry, tremor, paresis—then quantify their acoustic fingerprints. By linking kinematics, aerodynamics, and sound, I work toward biomarkers for early detection and monitoring of neurological disease (e.g., Parkinson’s). The same tools enable clinically relevant rehearsal and optimization: pre-/post-operative planning for voice care, tuning injection or implant parameters, and testing therapy strategies while safeguarding vocal health. To accelerate the field, I self-funded much of the foundational hardware (molds, materials, and synthetic muscles) so experiments could progress without delay.
I have authored 17 peer-reviewed journal articles and presented results at multiple international conferences. Across projects I emphasize rigor, transparency, and open science—sharing designs, code, and datasets so others can replicate and build upon this work. My long-term vision is to unite neurology and voice: to understand how the brain shapes phonation across health and disease, and to translate that understanding into objective diagnostics and personalized interventions. By integrating synthetic physiology with computation and clinical collaboration, I aim to make the voice a window into the nervous system—and a target for timely, effective care.
Lab Notes
Nothing posted yet.
Project Backers
- 7 Backers
- 35% Funded
- $3,350 Total Donations
- $478.57 Average Donation


