About This Project
Currently there is no standardized measure of surgical skill in Ophthalmology. Our study tests the feasibility of "crowdsourcing" evaluations of surgical videos by lay raters as a low-burden, low-cost, objective, and reliable method of measuring resident surgical skill. If lay raters can grade surgical technique as accurately as experts, skills assessments can be quantified and standardized to define a "competence" threshold.
What is the context of this research?
How do you know that your grandmother's eye doctor is a good cataract surgeon? Surgeon skill is a factor in surgical outcomes, yet there is almost no data describing what level of skill or experience defines surgical competence. A major barrier to defining competence is the lack of practical means to measure surgical skill. Current grading tools available to surgical teachers are time consuming, onerous to fill out, and expensive (time spent filling out forms is time not spent seeing patients). Our study will test the feasibility and validity of "crowdsourcing" evaluations of surgical videos to lay raters as a cheaper, faster, easier, and more reliable way to measure resident operative surgical skill.
What is the significance of this project?
Despite the widely held belief that only surgeons have the expertise to assess surgical skill, growing evidence contradicts this assumption. When you watch figure skating at the Winter Olympics, you can probably tell who deserves the gold medal, even though you were never a figure skater and never trained as a judge. Similarly, we believe that lay raters can judge the quality of a defined surgical maneuver in cataract removal as well as expert graders can. If this is true, frequent grading of surgical skill can be incorporated into training programs in a way not possible with expert evaluators. Furthermore, it permits comparison of resident skill across different training sites, since the graders are not constrained by location.
What are the goals of the project?
We will collect videos of cataract surgery by all trainees who operate in the last month of the academic year, including soon-to-be graduates. Both experts and lay raters will assess surgical skill in the videos using a modified OSATS grading tool. In this cross-sectional sample of residents in varied years of training, we will determine whether lay raters and surgical experts agree in their scores and can discriminate between groups of residents with more or less surgical experience (group differences). To determine whether lay raters can detect improvement in surgical skill over time for individual trainees, we will assess videos from residents in their final year of training as they progress along the learning curve. We expect lay raters' assessments to equal those of surgical experts.
We believe that this research will ultimately lead to better surgeon training and therefore improved patient care. Yet attempts to fund this project through traditional medical science granting agencies that focus on outcomes research have been unsuccessful. However, Dr. Todd Margolis, Chairman of the Dept. of Ophthalmology at Washington University, believes this research is fundamental to improving surgical outcomes. Thus he is providing the pilot funds to contract with CSATS (a fee-for-service company that provides lay rater evaluations of surgical videos) to cover 80 video assessments. CSATS has also agreed to an educational discount on their contract price. Experiment.com funds will be used to support the salary of a student who will edit and upload all of the videos (anticipated 1 hr/video editing, 1 hr/5 videos uploading) and interface with the expert reviewers (3 reviewers). Encrypted hard drives will be distributed to trainees to store surgical videos prior to upload.
May 15 – Jun 30, 2017: Collect all surgical videos for the cross-sectional study.
Jul 28, 2017: Edit all collected videos for the cross-sectional study and submit them to CSATS lay raters and to experts for review.
Oct 13, 2017: CSATS lay rater reviews should be complete shortly after submission. Expert rater reviews for the cross-sectional study should be complete in 10 weeks (5 reviews per week).
Nov 17, 2017: Complete data analysis of lay rater vs. expert scores for the cross-sectional study.
Meet the Team
We all currently work and learn at Wash U. Grace is a resident who joined our team from the MD/PhD program at U Penn. Jenny is our Chief Resident, hailing from the residency program at UCLA with perspective on how programs differ. Susan is a basic scientist turned education researcher. Our goal is to bring the same scientific rigor to education that we use to advance science and medicine.
My research is intended to develop a database of resident experience from which the normal trajectory of skill development can be determined. This dataset will inform milestones that residents should achieve and benchmarks that they must meet prior to entering independent practice. In Ophthalmology, we have 36 months to train residents to independence. As educators, we are responsible to the public for the product that we produce. Benchmarks should be "hard stops" in the training program that trigger educational intervention, extension of training, or dismissal if not achieved. Only by assuring that every resident achieves the same level of proficiency, regardless of program, can we guarantee that all the physicians we train are clinically and surgically competent upon graduation from the residency program.