A Powerful AI Tool Could Help Medical Professionals Treat Serious Motor Dysfunction

The inexpensive, easy-to-use system analyzes routine video images for variations in human movement.

Image: An AI system that uses a single camera to analyze movement impairment could help patients with cerebral palsy. (Courtesy of Stanford Neuromuscular Biomechanics Lab)

Anyone who’s seen Avengers, Avatar, or most any blockbuster in the last couple of decades knows the stunning capabilities of today’s special effects. Actors dressed in suits with reflective dots placed at key points of the body are filmed by several cameras synched with powerful computers. The resulting digital images allow computer graphic artists to place the actors into virtually any setting they can dream up.

Such motion-capture tools are also used in leading hospitals to diagnose movement dysfunction in people with cerebral palsy, Parkinson’s disease, stroke, and other debilitating conditions. But the systems are so complex and expensive that they remain out of reach for most medical professionals. Soon, that may change.

In a recent paper published in the journal Nature Communications, researchers at Stanford and Gillette Children’s Specialty Healthcare in Minnesota announced they have used artificial intelligence and video shot with a single camera to analyze movement impairment in people diagnosed with cerebral palsy. The inexpensive, easy-to-use, and open-source system stands to democratize the study of neurological and musculoskeletal disorders and help doctors better identify and treat these disorders and track patients’ progress.

Powerful and Inexpensive

“Our system puts remarkable diagnostic capabilities within reach of virtually every neurological and orthopedic clinic in the country,” says Scott Delp, a professor of mechanical engineering, bioengineering, and orthopedic surgery at Stanford who helped lead the research team.

“This study is just the tip of the iceberg. The opportunities for expanding this approach to different clinical populations and additional important metrics are really limitless,” says Michael Schwartz of the Department of Orthopedic Surgery, University of Minnesota, and the Gillette Children’s Specialty Healthcare Center for Gait and Motion Analysis, the study’s senior author.

The science of “gait analysis” evaluates factors like walking speed, cadence, and symmetry in three dimensions. Measuring such complex motion from two-dimensional, single-camera footage has, until now, been impossible. With the aid of artificial intelligence and a database of more than 3,000 videos of patients, however, the researchers have found a way. The team can quantify and precisely measure variations in patient movement to assess the severity of impairment, devise treatments, and track rehabilitation progress or decline in function.
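Gait parameters like these can in principle be read directly off keypoint trajectories. As a rough illustration (a hypothetical sketch, not the authors’ code), the Python snippet below estimates cadence from the horizontal ankle positions a pose detector might output; the array names and default frame rate are assumptions.

```python
# Hypothetical sketch (not the study's code): estimate cadence from 2D
# keypoint trajectories. `left_ankle_x` and `right_ankle_x` are assumed
# NumPy arrays of horizontal ankle positions (pixels), one value per frame.
import numpy as np

def cadence_from_ankles(left_ankle_x, right_ankle_x, fps=30.0):
    """Estimate cadence (steps per minute) from ankle-crossing events.

    A step is counted each time the horizontal separation between the
    ankles changes sign, i.e., whenever the legs cross mid-stride.
    """
    separation = left_ankle_x - right_ankle_x
    steps = np.count_nonzero(np.diff(np.sign(separation)) != 0)
    minutes = len(separation) / fps / 60.0
    return steps / minutes
```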

“A cost-efficient, single-camera technology like this should enhance and expand the reach of clinical practice and enable large-scale clinical studies that simply weren’t possible before,” says Lukasz Kidziński, a Mobilize Center distinguished postdoctoral fellow working in Delp’s Neuromuscular Biomechanics Lab and the study’s lead author.

Evolving Field

The key development was the evolution of deep neural networks over the past several years. Neural networks are an area of artificial intelligence in which computer systems modeled on the human brain study large datasets to discern complex, sometimes surprising patterns that humans often cannot detect.

The algorithm first learns what a typical gait looks like and then measures deviations in a patient video by predicting where the patient’s knees, hips, ankles, and other body parts should be in the image.

Prior to 2014 or so, cataloging all those knees, ankles, hips, and toes in each frame of video had to be done by hand, by teams of trained engineers. The process was labor-intensive, even more involved than the Hollywood-style motion-capture systems. Neural networks can now do it automatically, if fed enough data to learn from, Kidziński explains.

“If you give the computer a lot of examples, they can locate the key points like elbows, knees, and ankles,” Kidziński says.

For detection of key points, the team turned to OpenPose, a system developed at Carnegie Mellon University and trained on 1.5 million annotated images of humans in motion. They used OpenPose to detect key points in some 1,700 single-camera videos of patients with cerebral palsy recorded at Gillette Children’s Specialty Healthcare, then built a neural network to estimate gait parameters, such as walking speed or severity of walking impairment, from the trajectories of those key points.
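The published architecture isn’t reproduced here, but the core idea, regressing a gait parameter from a window of keypoint trajectories, can be sketched with a small convolutional network. In the hypothetical PyTorch example below, the input shape (25 OpenPose-style keypoints times two coordinates over a 124-frame window), the layer sizes, and the regression target are illustrative assumptions, not the paper’s configuration.

```python
# Hypothetical sketch, not the published model: a small 1D convolutional
# network mapping keypoint trajectories to one scalar gait parameter.
import torch
import torch.nn as nn

class GaitParameterNet(nn.Module):
    def __init__(self, n_channels=50, n_outputs=1):
        # n_channels: 25 assumed keypoints x (x, y) coordinates per frame.
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=8), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 32, kernel_size=8), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Flatten(),
            nn.LazyLinear(n_outputs),  # regress the gait parameter
        )

    def forward(self, x):
        # x: (batch, n_channels, frames) keypoint trajectories.
        return self.net(x)

model = GaitParameterNet()
predicted_speed = model(torch.randn(4, 50, 124))  # e.g., walking speed
```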

Their new algorithm estimates the motions of ankles, knees, hips, heels, the pelvis, and other body parts in each frame of patient video and calculates how far the patient’s actual movement deviates from an unimpaired baseline model.
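One way to picture that deviation step: once the keypoints are detected and time-aligned against an unimpaired reference trajectory, a per-frame deviation score reduces to a distance between the two. The sketch below is a hypothetical simplification; the array shapes, alignment, and metric are assumptions rather than the published method.

```python
# Hypothetical simplification of the deviation-from-baseline idea.
# `patient` and `baseline` are assumed arrays of shape (frames, keypoints, 2)
# holding normalized, time-aligned (x, y) keypoint coordinates.
import numpy as np

def gait_deviation(patient, baseline):
    """Mean Euclidean keypoint deviation from the baseline, per frame."""
    per_keypoint = np.linalg.norm(patient - baseline, axis=2)
    return per_keypoint.mean(axis=1)  # one deviation score per frame
```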

The team says its methods are “dramatically” lower in cost and time than existing motion-capture systems and require no special equipment or training to operate. While conventional motion capture requires a three-hour patient visit and another hour of data processing, the new approach produces results in under a minute.

“A model like this can help clinicians assess early symptoms of neurological disorders and enable low-cost surveillance of disease progression,” says Delp. Parents could take a video of their child and send it in for analysis, saving a trip to the clinic, he offers.

“It also gives us the opportunity to learn about cerebral palsy from a much larger and more representative patient population,” Schwartz adds.

There are limitations, however. First, the recording protocol must be closely followed, using similar camera angles and patient attire. Second, the application is based only on videos taken from the side, making analysis in other planes, such as front-on or rear-on views, a challenge. The researchers note, however, that the artificial intelligence framework used in this research could yield models that work in those other planes, an issue they hope to address in future iterations.

“This is a significant leap forward from controlled laboratory tests and allows virtually limitless repeated measures and patient tracking over time,” Delp says.

The application is open source and freely available to any interested researchers. Watch a live demo here. Scripts for training the machine-learning models and analyzing the results, along with the code used to generate all figures, are available here, and an anonymized video dataset is available here.

Other contributing authors include Stanford senior research engineer Jennifer Hicks, graduate student Apoorva Rajagopal, and master’s student Bryan Yang.
