Project Results

Crowdsourced assessments of skill for cataract surgery were highly correlated with expert assessments. However, correlation is not the same as equivalence. We found that video length (time) was a better predictor of expert score than crowd score. By using both variables in combination (crowd score and video length), we were able to derive a conversion factor that very closely predicted expert score.
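The results summary does not specify the model behind the conversion factor; one plausible reading is an ordinary least-squares regression of expert score on crowd score and video length. The sketch below illustrates that approach on synthetic data; the variable names (crowd_score, video_length_s, expert_score) and all numbers are illustrative assumptions, not the study's data or published coefficients.

```python
# Minimal sketch (assumed, not the authors' actual analysis): derive a
# "conversion" from crowd score and video length to predicted expert score
# using ordinary least squares on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 80  # e.g., one row per assessed video

crowd_score = rng.uniform(1, 5, n)          # mean lay-rater score (illustrative 1-5 scale)
video_length_s = rng.uniform(300, 1800, n)  # case duration in seconds
# Synthetic "expert" scores that depend on both predictors plus noise.
expert_score = 4.0 + 0.5 * crowd_score - 0.002 * video_length_s + rng.normal(0, 0.3, n)

# Design matrix with an intercept column; fit by least squares.
X = np.column_stack([np.ones(n), crowd_score, video_length_s])
coef, *_ = np.linalg.lstsq(X, expert_score, rcond=None)
intercept, b_crowd, b_length = coef

predicted = X @ coef
r = np.corrcoef(predicted, expert_score)[0, 1]
print(f"expert ~ {intercept:.2f} + {b_crowd:.2f}*crowd + {b_length:.4f}*length (r = {r:.2f})")
```

In this framing, the fitted coefficients play the role of the conversion factor: given a new video's crowd score and length, they yield a predicted expert score.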

Crowdsourced Assessment of Surgical Skill Proficiency in Cataract Surgery

Grace L Paley, Rebecca Grove, Tejas C Sekhar, Jack Pruett, Michael V Stock, Tony N Pira, Steven M Shields, Evan L Waxman, Bradley S Wilson, Mae O Gordon, Susan M Culican

About This Project

I am trying to find a cheap, easy, and reliable way to grade the skill of surgeons in training. Experts can judge resident surgical skill, but they are slow, have biases, and are expensive. If you watched the video, I'll bet you could tell the difference between the expert and the trainee, even without the labels. We will test the hypothesis that crowdsourcing surgical skill assessment is a cheap, fast, and standardized alternative to expert grading.

What is the context of this research?

Despite the widely held belief that only surgeons have the expertise to assess surgical skill, growing evidence contradicts this assumption. When you watch figure skating in the Winter Olympics, you can probably tell who deserves the gold medal, even though you were never a figure skater and never trained as a judge. Similarly, we believe that lay raters can judge the quality of a defined surgical maneuver in cataract removal as well as expert graders can. If this is true, frequent grading of surgical skill can be incorporated into training programs in a way not possible with expert evaluators. Furthermore, it permits comparison of resident skill across different training sites, since the graders are not constrained by location.

What is the significance of this project?

How do you know that your grandmother's eye doctor is a good cataract surgeon? Surgeon skill is a factor in surgical outcomes, yet there are almost no data describing what level of skill or experience defines surgical competence. This is because we lack practical means to measure surgical skill. Resident surgical skill is decreed by a committee of faculty surgeons who have operated with the resident. There is no standardized, objective measure of skill to accompany this determination. Our study will test the feasibility and validity of “crowdsourcing” evaluations of surgical videos to lay raters as a cheaper, faster, easier, and more reliable way to measure resident operative surgical skill. If it works, it could be used like an "SAT test" to standardize measurement of skill.

What are the goals of the project?

We will collect videos of cataract surgery by all trainees who operate in the last month of the academic year, including soon-to-be graduates. Both experts and lay raters will assess surgical skill in the videos using a modified OSATS grading tool. In this cross-sectional sample of residents in varied years of training, we will determine whether lay raters and surgical experts agree in their scores and can discriminate between groups of residents with more or less surgical experience (group differences). To determine whether lay raters can detect improvement in surgical skill over time for individual trainees, we will assess videos from residents in their final year of training as they progress along the learning curve. We expect lay raters' assessments to equal those of surgical experts (see the sketch below).
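The proposal does not state which agreement statistic will be used; as one simple possibility, agreement between lay-rater and expert scores on the same videos could be summarized with a Pearson correlation and a mean absolute difference, as in the hedged sketch below (the score arrays are placeholders, not study data).

```python
# Minimal sketch (assumed, not from the proposal): summarize agreement between
# lay-rater and expert scores for the same set of videos.
import numpy as np

lay_scores = np.array([3.2, 4.1, 2.8, 4.6, 3.9, 2.5])     # placeholder mean crowd score per video
expert_scores = np.array([3.0, 4.3, 2.6, 4.8, 4.0, 2.4])  # placeholder mean expert score per video

r = np.corrcoef(lay_scores, expert_scores)[0, 1]          # linear association
mad = float(np.mean(np.abs(lay_scores - expert_scores)))  # average disagreement in score units
print(f"Pearson r = {r:.2f}, mean absolute difference = {mad:.2f}")
```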


Budget


We believe that this research will ultimately lead to better surgeon training and therefore improved patient care. Yet attempts to fund this project through traditional medical science granting agencies that focus on outcomes research have been unsuccessful. However, Dr. Todd Margolis, Chairman of the Dept. of Ophthalmology at Washington University, believes this research is fundamental to improving surgical outcomes, so he is providing the pilot funds to contract with CSATS (a fee-for-service company that provides lay-rater evaluations of surgical videos) to cover 80 video assessments. CSATS has also agreed to an educational discount on their contract price. Experiment.com funds will be used to support the salary of a student who will edit and upload all of the videos (anticipated 1 hr per video for editing, 1 hr per 5 videos for uploading) and interface with the 3 expert reviewers. Encrypted hard drives will be distributed to trainees to store surgical videos prior to upload.


Endorsed by

This project is a great idea and will be very useful to the specific field of ophthalmology but will be of value to surgical education in general. Dr. Susan Culican is perfectly positioned to carry out this study as she is expert in this area, has contacts across the world in this area, and most importantly is persistent. We need systems like this to help ensure patient safety for completion of residency and for procedures new to a surgeon after residency. I wholeheartedly endorse this project.
As a program director for ophthalmology residency training and a member of the review committee for residency training programs, I have seen how difficult it is to accurately assess surgical skill. This project will help us in looking for better ways to do this, and Dr. Culican, as a program director and a national leader in ophthalmology education, is perfectly positioned to test this hypothesis. I am happy to provide my strong endorsement of the project.
This novel experiment has the potential to fundamentally change the way we determine surgical competency within ophthalmology. This is an incredibly important and largely unanswered question for ophthalmology, which could have immense value to the entire body of medical specialties. Susan Culican is a renowned expert in ophthalmology surgical training and is uniquely positioned to carry out this work. I'm very excited to see this work move forward.
It's a longstanding tradition for residents to tape their cases for review. It's also a longstanding tradition for residents to neglect review of their cases after taping them. Formal grading systems are cumbersome and in their attempt to capture details frequently fail to assess the resident for readiness. Review of a case by a faculty member is time consuming. Dr Culican's hypothesis offers the possibility that crowd sourcing by a layperson may be a solution. The budget is modest. There's incredible bang for the buck here.

Project Timeline

May 15, 2017

Project Launched

Jun 30, 2017

Collect all surgical videos for the cross-sectional study.

Jul 28, 2017

Edit all collected videos for the cross-sectional study and submit to CSATS lay raters and to experts for review.

Oct 13, 2017

CSATS lay rater reviews should be complete shortly after submission. Expert rater reviews for the cross-sectional study should be complete in 10 weeks (5 reviews per week).

Nov 17, 2017

Complete data analysis on lay rater vs expert scores for the cross-sectional study.

Meet the Team

Susan Culican
Professor, Associate Dean of Graduate Medical Education

Affiliates

Department of Ophthalmology and Visual Neurosciences, University of Minnesota Medical School
Grace Paley
Resident Physician

Affiliates

Department of Ophthalmology and Visual Sciences, Washington University School of Medicine
Jenny Chen
Tejas Sekhar
Research Assistant

Affiliates

Weinberg College of Arts & Sciences, Northwestern University

Team Bio

We all currently work and learn at Wash U. Grace is a resident who joined our team from the MD/PhD program at U Penn. Jenny is our Chief Resident, hailing from the residency program at UCLA with perspective on how programs differ. Susan is a basic scientist turned education researcher. Our goal is to bring the same scientific rigor to education that we use to advance science and medicine.

Susan Culican

My research is intended to develop a database of resident experience from which the normal trajectory of skill development can be determined. This dataset will inform milestones that residents should achieve and benchmarks that they must meet prior to entering independent practice. In Ophthalmology, we have 36 months to train residents to independence. As educators, we are responsible to the public for the product that we produce. Benchmarks should be “hard stops” in the training program and should trigger educational intervention, extension of training, or dismissal if not achieved. Only by assuring that every resident achieves the same level of proficiency, regardless of program, can we assure that all of the physicians we train are clinically and surgically competent upon graduation from the residency program.


Tejas Sekhar

Tejas is a recent graduate of Northwestern University, where he received a Bachelor of Arts in Neuroscience and English Literature in just three years. Prior to joining this project, he completed two years of extracurricular kidney and urogenital research with the Nephrology and Urology Departments at Washington University Medical School, in collaboration with Barnes-Jewish Hospital, as well as an intensive summer of molecular biology research with the Silverman Hall for Molecular Therapeutics through the Department of Molecular Biosciences at Northwestern University.


Project Backers

  • 62 Backers
  • 107% Funded
  • $4,310 Total Donations
  • $53.39 Average Donation