NAVmetrics Initial Study Results Begin to Shed Light on Value of Navigation Metrics

June 2020 Vol 11, No 6

Several years ago, the Academy of Oncology Nurse & Patient Navigators (AONN+) unveiled 35 evidence-based oncology navigation metrics developed in response to a void in the literature measuring the programmatic success of navigation programs. These baseline metrics are vital to coordinating high-quality, team-based care and demonstrating the sustainability of navigation programs.

At the AONN+ 10th Annual Navigation & Survivorship Conference, members of the AONN+ Standardized Metrics Task Force presented the highly anticipated quantitative results of their pilot study entitled “National Evidence-Based Oncology Navigation Metrics: Multisite Exploratory Study to Demonstrate Value and Sustainability of Navigation Programs,” or “NAVmetrics” for short. According to the team, the evidence supports the need for standardized navigation measurements, and these results are beginning to shed light on the feasibility of various metrics once they are actually put into practice at a handful of oncology navigation programs across the United States.

“As evidence guides practice, it has become essential for navigation programs to identify and capture core metrics and standardize their data collection to demonstrate program outcomes,” said Elaine Sein, BSN, RN, study investigator, consultant, and past AONN+ Leadership Council member. “So it was our study team’s goal to be able to implement 10 of these 35 metrics that we published in 2017, and to gain an understanding of what the difficulties would be with implementation, and how we could best support people to actually implement them.”

NAVmetrics, designed in collaboration with the American Cancer Society and Chartis Oncology Solutions, went live on November 1, 2018, at 8 pilot sites (both academic and community institutions). The study sites captured and reported on data from 10 of the AONN+ standardized metrics over a 6-month period.

Metrics data were uploaded into the ONC iQ NAVmetrics cloud-based business intelligence platform to create participant-specific dashboards. Prior to the study launch, all sites also submitted 3 years of historical data (as available) to create a baseline for the study data.

“We have seen the impact of navigation improve cancer care in all of the communities we’ve worked in,” said Kelley D. Simpson, MBA, study investigator and Director and Partner at The Chartis Group. “So it’s not lost on me that this is a foundational study, because I know that every day you all struggle with justifying the role that navigators and care team members play.”

Why These 10 Metrics?

According to Ms Sein, the committee chose 10 metrics that complement the domains for navigation certification.

Two of the metrics fall under the domain of Coordination of Care/Care Transition: barriers to care and time from diagnosis to treatment. Another 2 metrics are under the domain of Operations Management/Organizational Development/Health Economics: measuring navigation caseload and measuring the number of readmissions among navigated patients at 30, 60, and 90 days.

Under the domain of Psychosocial Support Services/Assessment are 2 more metrics: psychosocial distress screening and social support referrals.

The remaining metrics fall under the domains of Survivorship/End of Life (measuring palliative care referrals), Patient Advocacy/Patient Empowerment (identifying patient learning styles at initial visit), Professional Roles and Responsibilities (measuring navigation knowledge at time of orientation), and finally, Research/Quality/Performance Improvement (measuring patient experience/satisfaction with care).

“The reason we chose these is because AONN+ really wanted to be able to dovetail our metrics with national indicators and national standards,” she said. “We chose these 10 to start with because many institutions that are already part of accrediting bodies are capturing many of these already, so we thought it would be kind of an easy way to get started.”

The Quantitative Study Outcomes

“We wanted this study to be engaging with each of the 8 participating sites,” said Alex Glonek, study investigator, ONC iQ developer and lead consultant at The Chartis Group.

The study team started by visiting each of the sites and meeting with the navigation teams that were going to be involved. “We really tried to understand where they were at from a data perspective, and what needs we could help them meet,” he said.

Each facility designed the most appropriate method to meet its organizational needs. This involved data mapping, as well as aligning its existing navigation system or electronic medical record (EMR) with the study goal of capturing the 10 data metrics. Depending on the level of sophistication, each site was guided toward using an EMR pull or the NAVmetrics platform (approximately 41% of the study data were entered directly into the NAVmetrics platform).

Metric 1: Barriers to Care

The team identified an average of 2.2 barriers per patient across the 8 study sites, roughly in line with the historical baseline of 2.4 barriers per patient.

The total number of barriers was just over 10,000, and these were broken into 6 categories: practical, emotional, physical, financial, family, and spiritual. The most commonly reported of these were practical (34%), emotional (28%), and physical (25%).

The top reported barriers were transportation, worry, work, nervousness, fatigue, pain, loss of income, treatment decisions, and fear of treatment/side effects.

Metric 2: Time to Treatment

Average time from diagnosis to treatment (defined as a surgical encounter or initial treatment with chemotherapy or radiation) was 43 days. However, navigated patients experienced shorter wait times between diagnosis and treatment and received treatment 11 days sooner than non-navigated patients, on average.

“To the extent we could continue to validate that finding, it would certainly be valuable from a patient standpoint, and potentially from the standpoint of cost implication for navigation,” said Mr Glonek.

They also found that specific barriers contributed to delays in treatment, with practical barriers causing the biggest delays to treatment (average 56 days).

Metric 3: Caseload

“This is an important metric because we don’t have good benchmarks, and we really need them,” said Tricia Strusowski, MS, RN, study investigator, Manager at The Chartis Group and Chair of the AONN+ National Metrics Committee.

“The intent of this metric was around level of effort,” added Mr Glonek. “We tried to track open and closed cases, ‘open’ being someone you’re actively engaged with from a navigation standpoint, and ‘closed’ being someone more inactive. This language was a point of discussion throughout the study, and it’s seen in the results in terms of how few cases we were able to close relative to open cases.”

They received consistent data over the 6-month period and tracked an average of 88 cases per navigator. However, performance varied somewhat across individual sites.

According to Ms Strusowski, confusion at some sites about the definition of a “closed” case led to cases staying open longer than they should. “Then you have this huge overwhelming volume, and you don’t have the opportunity to be as proactive as you want with your new patients or with the existing patients who really need additional services from you,” she noted.

After seeing the survey results, the team decided to change the definitions to “active” and “not active” patients. They also found that creating a dashboard of caseload volumes was helpful to the navigation teams.

Metric 4: Readmissions

In general, relative to baseline, the investigators saw a slight decline in readmission rates at 30, 60, and 90 days during the study period.

But when they compared navigated to non-navigated patients, they found higher readmissions among navigated patients. “So maybe the navigator is not directly responsible for the readmission, or perhaps there’s a complexity issue in terms of which type of patients are navigated,” said Mr Glonek. “This is something we’ll continue to look at as we move forward with the metrics.”

Metrics 5 and 6: Psychosocial Distress and Social Support Referrals

Of the almost 2000 patients in the study, 42% had a psychosocial distress screening. About one-third had a distress score of 0 on the National Comprehensive Cancer Network Distress Thermometer, but most scores fell between 2 and 5.

“It wasn’t surprising to us that the rarer and more complex cancers, like leukemia and lymphoma, made up a higher percentage of the overall screening group,” Ms Simpson reported.

The study team found an average of 0.4 social support referrals per patient. “I found this to be pretty low,” she added. “But when we looked at those across disease types, ‘breast’ and ‘GI other’ rose to the top, with about 0.7 referrals per patient.”

Most referrals were internal to other care team members, with about 25% being referrals to community resources and other external sources.

According to Ms Strusowski, distress screening volumes have increased since the pilot project, and many of the sites plan to continue this process after the study. One site bought iPads to help their patients complete distress screening, while another initiated a Lean Six Sigma process and process mapping to share with their multidisciplinary team.

Metric 7: Palliative Care

“This was an interesting one to track because of variance amongst our participant sites,” said Ms Simpson. “At many of the sites, the navigator could not make a direct referral to palliative care. It had to be done through an MD or APP.”

They still encouraged sites to track referrals, even if the referral was to a physician who would then refer the patient to palliative care.

Some sites had a stronger relationship with palliative care and therefore a smoother referral process; the percentage of patients referred per site ranged from 0% to 68%.

“Palliative care takes a bit more work; it’s not just a navigator activity or core competency,” said Ms Strusowski. “But as we start to look at value-based cancer care, we need to start referring patients to palliative care earlier in the continuum.”

She added that this particular performance improvement was valuable to the navigation teams, and many reported being significantly more proactive about making palliative care referrals.

Metrics 8 and 9: Learning Styles and Competencies

A total of 3219 learning styles were identified, with an average of 0.7 per patient. According to Ms Simpson, the bulk of these learning styles were visual and verbal, with aural, social, and physical making up the balance.

More learning styles were identified in the more rare and complex cancer categories. “Maybe more education is done with some of those rare and complex tumors,” she noted.

In terms of navigator competency, 7 of the 8 sites had a navigator-specific checklist, whereas only 1 of the 8 sites had a general oncology checklist built into the navigator checklist. There was significant variation across sites regarding orientation checklists, and average onboarding time was about 60 days.

Across sites, tools used to support training included the Oncology Nursing Society Core Competencies, the AONN+ 8 knowledge domains and modules, and the George Washington University Cancer Institute Patient Navigation Training. The study team was also able to validate that every site had an annual competency evaluation process.

Metric 10: Patient Satisfaction

These were quantitative findings about how much patients felt their navigators helped them throughout their cancer journey, how often they communicated, and whether they felt their navigator encouraged them to participate in shared decision-making.

Patients were asked 8 questions related to their satisfaction, and across the board, satisfaction was high. For example, almost 75% of patients answered “a lot” or “some” in response to the question, “How often has your navigator reviewed treatment options?” Ninety percent said “yes” when asked, “Did a navigator encourage you to participate in decisions about treatment?” And over 80% responded “yes” when asked, “Since diagnosis, did a navigator ask your goals for treatment?”

The majority of patients gave their navigators an overall rating of 7 to 10, with a large majority giving their navigators a 10 rating. “We would of course like these to be at 100%,” added Ms Simpson. “But these were very high scores.”

