Biosignals, Robotics, and Rehabilitation

Bridging the gap between human neurophysiology and intelligent machines

A physician positions his head into a special medical display and uses his hands to remotely operate a surgical robot, seen in the background of an operating room.

A team of researchers and physicians led by Prof. S. Farokh Atashzar at NYU Tandon is working to change the way we view healthcare with intelligent, interactive robotic and AI-driven assistive machines that can augment human capabilities and break human barriers.

NYU Tandon

This sponsored article is brought to you by NYU Tandon School of Engineering.

To address today’s health challenges, especially in our aging society, we must become more intelligent in our approaches. Clinicians now have access to a range of advanced technologies designed to assist early diagnosis, evaluate prognosis, and enhance patient health outcomes, including telemedicine, medical robots, powered prosthetics, exoskeletons, and AI-powered smart wearables. However, many of these technologies are still in their infancy.

The belief that advancing technology can improve human health is central to research related to medical device technologies. This forms the heart of research for Prof. S. Farokh Atashzar, who directs the Medical Robotics and Interactive Intelligent Technologies (MERIIT) Lab at the NYU Tandon School of Engineering.

Atashzar is an Assistant Professor of Electrical and Computer Engineering and Mechanical and Aerospace Engineering at NYU Tandon. He is also a member of NYU WIRELESS, a consortium of researchers dedicated to the next generation of wireless technology, as well as the Center for Urban Science and Progress (CUSP), a center of researchers dedicated to all things related to the future of modern urban life.


Atashzar’s work is dedicated to developing intelligent, interactive robotic and AI-driven assistive machines that can augment human sensorimotor capabilities and allow our healthcare system to go beyond natural competences and overcome physiological and pathological barriers.

Stroke detection and rehabilitation

Stroke is the leading cause of age-related motor disabilities and is becoming more prevalent in younger populations as well. But while there is a burgeoning marketplace for rehabilitation devices that claim to accelerate recovery, including robotic rehabilitation systems, recommendations for how and when to use them are based mostly on subjective evaluation of the sensorimotor capacities of patients in need.

Atashzar is working in collaboration with John-Ross Rizzo, associate professor of Biomedical Engineering at NYU Tandon and Ilse Melamid Associate Professor of Rehabilitation Medicine at the NYU School of Medicine, and with Dr. Ramin Bighamian from the U.S. Food and Drug Administration to design a regulatory science tool (RST) based on data from biomarkers in order to improve the review processes for such devices and how best to use them. The team is designing and validating a robust recovery biomarker enabling a first-ever stroke rehabilitation RST based on exchanges between regions of the central and peripheral nervous systems.

Portrait of S. Farokh Atashzar smiling at camera with trees in the background. S. Farokh Atashzar is an Assistant Professor of Electrical and Computer Engineering and Mechanical and Aerospace Engineering at New York University Tandon School of Engineering, and directs the MERIIT Lab at NYU Tandon.NYU Tandon

In addition, Atashzar is collaborating with Smita Rao, PT, the inaugural Robert S. Salant Endowed Associate Professor of Physical Therapy. Together, they aim to identify AI-driven computational biomarkers for motor control and musculoskeletal damage and to decode the hidden complex synergistic patterns of degraded muscle activation using data collected from surface electromyography (sEMG) and high-density sEMG. In the past few years, this collaborative effort has been exploring the fascinating world of “Nonlinear Functional Muscle Networks” — a new computational window (rooted in Shannon’s information theory) into human motor control and mobility. This synergistic network orchestrates the “music of mobility,” harmonizing the synchrony between muscles to facilitate fluid movement.
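The functional muscle networks described above are built from information-theoretic coupling between muscle activity signals. As a minimal illustrative sketch, not the lab's actual pipeline, the idea can be shown with pairwise mutual information between sEMG envelope channels, thresholded into a network adjacency matrix. The bin count, threshold, and toy signals here are all assumptions for illustration.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of mutual information (in bits) between two signals."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def muscle_network(emg, threshold=0.2, bins=16):
    """Build an undirected functional muscle network.

    emg: (n_muscles, n_samples) array of rectified sEMG envelopes.
    Returns a boolean adjacency matrix: True where pairwise mutual
    information exceeds the (hypothetical) threshold.
    """
    n = emg.shape[0]
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            mi = mutual_information(emg[i], emg[j], bins)
            adj[i, j] = adj[j, i] = mi > threshold
    return adj

# Toy demo: two strongly coupled channels and one independent noise channel.
rng = np.random.default_rng(0)
base = rng.normal(size=5000)
emg = np.vstack([base,
                 base + 0.1 * rng.normal(size=5000),
                 rng.normal(size=5000)])
adj = muscle_network(emg)
```

In this toy example the first two channels end up connected in the network while the independent channel stays isolated; real analyses would also need bias-corrected estimators and statistical significance testing of each edge.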

But rehabilitation is only one of the research thrusts at the MERIIT lab. Preventing strokes from happening, or recurring, heads off the problem before it starts. For Atashzar, a big clue could lie where you least expect it: in your retina.

Atashzar, along with NYU Abu Dhabi Assistant Professor Farah Shamout, is working on a project they call “EyeScore,” an AI-powered technology that uses non-invasive scans of the retina to predict the recurrence of stroke in patients. They use optical coherence tomography — a scan of the back of the retina — and track changes over time using advanced deep learning models. The retina, attached directly to the brain through the optic nerve, can serve as a physiological window into changes in the brain itself.

Atashzar and Shamout are currently formulating their hybrid AI model, pinpointing the exact changes that can predict a stroke or its recurrence. The resulting model will analyze these images and flag potentially troublesome developments. And since the scans are already in use in optometrists’ offices, this life-saving technology could be in the hands of medical professionals sooner than expected.

Preventing downturns

Atashzar is utilizing AI algorithms for uses beyond stroke. Like many researchers, his gaze was drawn to the largest medical event in recent history: COVID-19. In the throes of the COVID-19 pandemic, the very bedrock of global healthcare delivery was shaken. COVID-19 patients, susceptible to swift and severe deterioration, presented a serious problem for caregivers.

Especially in the pandemic’s early days, when our grasp of the virus was tenuous at best, predicting patient outcomes posed a formidable challenge. The merest tweaks in admission protocols held the power to dramatically shift patient fates, underscoring the need for vigilant monitoring. As healthcare systems groaned under the pandemic’s weight and contagion fears loomed, outpatient and nursing center residents were steered toward remote symptom tracking via telemedicine. This cautious approach sought to spare them unnecessary hospital exposure, allowing in-person visits only for those in the throes of grave symptoms.

But while much of the pandemic’s research spotlight fell on diagnosing COVID-19, this study took a different avenue: predicting future patient deterioration. Existing studies often juggled an array of data inputs, from complex imaging to lab results, but failed to harness the data’s temporal aspects. Enter this research, which prioritized simplicity and scalability, leaning on data easily gathered not only within medical walls but also in the comfort of patients’ homes with simple wearables.

S. Farokh Atashzar and colleagues at NYU Tandon are using deep neural network models to assess COVID data and try to predict patient deterioration in the future.

Atashzar, along with his Co-PI on the project Yao Wang, Professor of Biomedical Engineering and Electrical and Computer Engineering at NYU Tandon, used a novel deep neural network model to assess COVID data, leveraging time series data on just three vital signs to foresee COVID-19 patient deterioration for some 37,000 patients. The ultimate prize? A streamlined predictive model capable of aiding clinical decision-making for a wide spectrum of patients. Oxygen saturation, heart rate, and temperature formed the trio of vital signs under scrutiny, a choice propelled by the ubiquity of wearable tech like smartwatches. Other signs, such as blood pressure, were deliberately excluded because these wearables cannot measure them.

The researchers used real-world data from NYU Langone Health’s archives spanning January 2020 to September 2022. Predicting deterioration within timeframes of 3 to 24 hours, the model analyzed vital sign data from the preceding 24 hours. This crystal ball aimed to forecast outcomes ranging from in-hospital mortality to intensive care unit admission or intubation.
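The shape of this prediction task can be illustrated with a deliberately simple sketch: summarize a 24-hour window of the three vital signs into features and pass them through a logistic risk score. The 15-minute sampling grid, the features, and the weights below are all hypothetical stand-ins; the actual study used a deep neural network trained on hospital data, not hand-set coefficients.

```python
import numpy as np

def window_features(vitals):
    """Summarize a 24-hour window of vitals into features.

    vitals: (96, 3) array — SpO2 (%), heart rate (bpm), temperature (°C),
            sampled every 15 minutes (an assumed sampling scheme).
    Returns each signal's mean and linear trend (slope per hour): a (6,) vector.
    """
    t = np.arange(vitals.shape[0]) / 4.0  # time in hours
    means = vitals.mean(axis=0)
    slopes = np.array([np.polyfit(t, vitals[:, k], 1)[0] for k in range(3)])
    return np.concatenate([means, slopes])

def risk_score(features, w, b):
    """Logistic risk of deterioration within the next 3-24 hours."""
    return 1.0 / (1.0 + np.exp(-(features @ w + b)))

# Illustrative weights only (a trained model would learn these): falling
# SpO2 and rising heart rate / temperature push the score up.
w = np.array([-0.2, 0.05, 1.0, -3.0, 0.3, 2.0])
b = -24.0

stable = np.column_stack([np.full(96, 97.0),          # steady SpO2
                          np.full(96, 75.0),          # steady heart rate
                          np.full(96, 36.8)])         # steady temperature
declining = np.column_stack([np.linspace(96, 88, 96), # SpO2 falling
                             np.linspace(80, 110, 96),# heart rate rising
                             np.linspace(37.0, 38.6, 96)])  # fever developing

print(risk_score(window_features(stable), w, b),
      risk_score(window_features(declining), w, b))
```

With these toy inputs the stable patient scores low and the declining patient scores high, mirroring how a trained temporal model separates the two trajectories.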

“In a situation where a hospital is overloaded, getting a CT scan for every single patient would be very difficult or impossible, especially in remote areas when the healthcare system is overstretched,” says Atashzar. “So we are minimizing the need for data, while at the same time, maximizing the accuracy for prediction. And that can help with creating better healthcare access in remote areas and in areas with limited healthcare.”

In addition to addressing the pandemic at the micro level (individuals), Atashzar and his team are also working on algorithmic solutions that can assist the healthcare system at the meso and macro level. In another effort related to COVID-19, Atashzar and his team are developing novel probabilistic models that can better predict the spread of disease when taking into account the effects of vaccination and mutation of the virus. Their efforts go beyond the classic small-scale models that were previously used for small epidemics. They are working on these large-scale complex models in order to help governments better prepare for pandemics and mitigate rapid disease spread. Atashzar is drawing inspiration from his active work with control algorithms used in complex networks of robotic systems. His team is now utilizing similar techniques to develop new algorithmic tools for controlling spread in the networked dynamic models of human society.
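To make the epidemic-modeling idea concrete, here is a minimal deterministic sketch: a classic SIR compartmental model extended with a vaccination flow, integrated with Euler steps. This is a toy stand-in for the lab's richer probabilistic, mutation-aware network models; all rates below are illustrative assumptions.

```python
import numpy as np

def simulate_sirv(beta=0.3, gamma=0.1, nu=0.01, days=300, dt=0.1, i0=1e-3):
    """SIR epidemic model with a vaccination compartment.

    beta: transmission rate per day, gamma: recovery rate per day,
    nu: fraction of susceptibles vaccinated per day (all hypothetical).
    Returns time series S, I, R, V as fractions of the population.
    """
    steps = int(days / dt)
    S, I, R, V = (np.empty(steps) for _ in range(4))
    s, i, r, v = 1.0 - i0, i0, 0.0, 0.0
    for k in range(steps):
        new_inf = beta * s * i * dt   # susceptibles becoming infected
        new_rec = gamma * i * dt      # infected recovering
        new_vac = nu * s * dt         # susceptibles getting vaccinated
        s -= new_inf + new_vac
        i += new_inf - new_rec
        r += new_rec
        v += new_vac
        S[k], I[k], R[k], V[k] = s, i, r, v
    return S, I, R, V

# Vaccination lowers the epidemic peak relative to the no-vaccination baseline.
_, I_novax, _, _ = simulate_sirv(nu=0.0)
_, I_vax, _, _ = simulate_sirv(nu=0.02)
print(I_novax.max(), I_vax.max())
```

The networked models Atashzar's team develops go further, coupling many such compartments across a contact network and treating vaccination and viral mutation probabilistically, but the control question is the same: which interventions flatten the infection curve.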

A person wearing a head-mount display uses their hand to manipulate a specialized robot control system. A state-of-the-art human-machine interface module with wearable controller is one of many multi-modal technologies tested in S. Farokh Atashzar’s MERIIT Lab at NYU Tandon.NYU Tandon

Where minds meet machines

These projects represent only a fraction of Atashzar’s work. In the MERIIT lab, he and his students build cyber-physical systems that augment the functionality of next-generation medical robotic systems. They delve into haptics and robotics for a wide range of medical applications. Examples include telesurgery and telerobotic rehabilitation, which are built upon the capabilities of next-generation telecommunications. The team is specifically interested in the application of 5G-based tactile internet in medical robotics.

Recently, he received a donation from the Intuitive Foundation: a Da Vinci research kit. This state-of-the-art surgical system will allow his team to explore ways for a surgeon in one location to operate on a patient in another—whether they are in a different city, region, or even continent. While several researchers have investigated this vision in the past decade, Atashzar is specifically concentrating on connecting the power of the surgeon’s mind with the autonomy of surgical robots, promoting discussions on ways to share surgical autonomy between the intelligence of machines and the minds of surgeons. This approach aims to reduce mental fatigue and cognitive load on surgeons while reintroducing the sense of haptics lost in traditional surgical robotic systems.

NYU Tandon professor S. Farokh Atashzar sits next to a Da Vinci surgical robot. Atashzar poses with NYU Tandon’s Da Vinci research kit, a state-of-the-art surgical system.NYU Tandon

In a related line of research, the MERIIT lab is also focusing on cutting-edge human-machine interface technologies that enable neuro-to-device capabilities. These technologies have direct applications in exoskeletal devices, next-generation prosthetics, rehabilitation robots, and possibly the upcoming wave of augmented reality systems in our smart and connected society. One significant challenge the team focuses on is predicting the intended actions of human users by processing signals generated by the functional behavior of motor neurons.

By solving this challenge using advanced AI modules in real-time, the team can decode a user’s motor intentions and predict the intended gestures for controlling robots and virtual reality systems in an agile and robust manner. Some practical challenges include ensuring the generalizability, scalability, and robustness of these AI-driven solutions, given the variability of human neurophysiology and the heavy reliance of classic models on data. Powered by such predictive models, the team is advancing the complex control of human-centric machines and robots. They are also crafting algorithms that take into account human physiology and biomechanics. This requires transdisciplinary solutions bridging AI and nonlinear control theory.
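The gesture-decoding pipeline sketched above can be illustrated in miniature: window the multichannel sEMG stream, extract a simple amplitude feature per channel, and classify. The nearest-centroid classifier, RMS feature, channel count, and synthetic signals below are all deliberately simple assumptions standing in for the lab's deep AI modules.

```python
import numpy as np

def rms_features(window):
    """Root-mean-square amplitude per sEMG channel for one analysis window."""
    return np.sqrt((window ** 2).mean(axis=0))

class CentroidGestureDecoder:
    """Nearest-centroid gesture classifier over windowed RMS features —
    a toy stand-in for a trained deep model."""

    def fit(self, windows, labels):
        feats = np.array([rms_features(w) for w in windows])
        self.classes_ = np.unique(labels)
        self.centroids_ = np.array(
            [feats[np.asarray(labels) == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, window):
        d = np.linalg.norm(self.centroids_ - rms_features(window), axis=1)
        return self.classes_[d.argmin()]

# Synthetic 4-channel sEMG: each "gesture" activates a different channel pair.
rng = np.random.default_rng(1)

def fake_window(active, n=200, channels=4):
    w = 0.05 * rng.normal(size=(n, channels))          # baseline noise
    w[:, active] += rng.normal(size=(n, len(active)))  # active-muscle bursts
    return w

train = [fake_window([0, 1]) for _ in range(20)] + \
        [fake_window([2, 3]) for _ in range(20)]
labels = ["grasp"] * 20 + ["open"] * 20
decoder = CentroidGestureDecoder().fit(train, labels)
print(decoder.predict(fake_window([0, 1])))
```

Real-time use would slide this window over the live signal so each new gesture prediction arrives within tens of milliseconds, which is where the generalizability and robustness challenges mentioned above become acute.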

Atashzar’s work dovetails perfectly with the work of other researchers at NYU Tandon, which prizes interdisciplinary work without the silos of traditional departments.

“Dr. Atashzar shines brightly in the realm of haptics for telerobotic medical procedures, positioning him as a rising star in his research community,” says Katsuo Kurabayashi, the new chair of the Mechanical and Aerospace Engineering department at NYU Tandon. “His pioneering research carries the exciting potential to revolutionize rehabilitation therapy, facilitate the diagnosis of neuromuscular diseases, and elevate the field of surgery. This holds the key to ushering in a new era of sophisticated remote human-machine interactions and leveraging machine learning-driven sensor signal interpretations.”

This commitment to human health, through the embrace of new advances in biosignals, robotics, and rehabilitation, is at the heart of Atashzar’s enduring work, and his unconventional approaches to age-old problems make him a perfect example of the approach to engineering embraced at NYU Tandon.