Keith Lomurray, Author at Kyruus Health
https://kyruushealth.com/author/klomurray/

CMS Needs to Make MRF Compliance Simpler
https://kyruushealth.com/cms-needs-to-make-mrf-compliance-simpler/ | October 7, 2021

Our team met with CMS to discuss MRF data elements that pose significant challenges to health plans as they work toward compliance with Transparency in Coverage mandates.

UPDATE: CMS has made the change HealthSparq recommended for plan IDs. The MRF schema has been updated to represent plan IDs as an array within a single file instead of requiring one file per plan ID. This is a significant improvement that reduces the risk of producing many massive files.

In October 2020, CMS issued the Transparency in Coverage final rule mandating that health plans publish machine-readable files (MRFs) covering their negotiated rates, out-of-network allowed amounts, and prescription (Rx) coverage rates and allowed amounts. The Rx file requirement was recently deferred indefinitely, but the public-access requirement for the other two still stands. Even with the new enforcement date of July 1, 2022, plans and groups face real compliance challenges. One area of particular concern is the data schema, which creates major problems for both the creators of MRFs and the public consumers of the files.

Major issues with MRF data schema include: 

  • Data redundancy. The proposed MRF schema contains significant redundancies that result in massive data duplication, which many health plans are likely struggling to manage.
  • Massive files. The schema produces many enormous files, potentially in the terabyte range, that the public cannot realistically consume via the download mechanism CMS requires.
  • Costly, impractical hosting. Files of this size are impractical to download via HTTPS and are costly for health plans to host online.

Our team previously provided comments to CMS on changes around plan identifiers, place of service codes, and provider identifiers; we anticipate these changes could significantly reduce file size. CMS has already published changes around place of service codes, and this week our team met with CMS to discuss the remaining data elements:

  • Plan identification: Each file is organized by plan ID. Many plans commonly share the same underlying contract rates, which results in massive data duplication when potentially thousands to hundreds of thousands of plans use the same rates. Our recommended approach is to shift from one file per plan ID to listing the plan IDs as an array within a single file covering all plans on the same contract.
  • Provider identifiers: The current schema requires plans to duplicate the provider list over and over, which is redundant and makes the files unwieldy. Our recommendation is to list the NPI and TIN as an object under each negotiated rate (see the sketch below).
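
To make the recommendations concrete, here is a minimal sketch in Python (with dicts standing in for the JSON files) contrasting the current one-file-per-plan layout with the consolidated layout we proposed. The field names are illustrative approximations, not the official CMS schema.

```python
# Illustrative only: field names approximate the CMS MRF schema but are
# not the official specification.

# Current layout: one file per plan ID, so N plans sharing a contract
# means N near-identical copies of every negotiated rate.
file_per_plan = {
    "plan_id": "12345",
    "in_network": [
        {
            "billing_code": "99213",
            "negotiated_rates": [
                {"npi": "1234567890", "tin": "12-3456789", "rate": 85.00},
            ],
        },
    ],
}

# Recommended layout: list every plan ID sharing the contract as an
# array in one file, and nest the NPI/TIN as an object under each
# negotiated rate so the provider list is not repeated.
file_per_contract = {
    "plan_ids": ["12345", "12346", "98765"],  # all plans on this contract
    "in_network": [
        {
            "billing_code": "99213",
            "negotiated_rates": [
                {
                    "provider": {"npi": "1234567890", "tin": "12-3456789"},
                    "rate": 85.00,
                },
            ],
        },
    ],
}
```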

We know others in the industry have expressed concerns about the schema, as well as the data being shared in these files. Based on our interactions with CMS, we expect an additional review process with impacted plans and anticipate updates to the schema to address the issues we noted with plan and provider identifiers. 

HealthSparq’s product and regulatory experts are monitoring for additional updates on price transparency mandates and the MRF data schema. We’ll be updating our clients on relevant changes as soon as we learn of them in order to support their compliance efforts. 

Diving into the Details: What You Need to Know About the Machine-Readable Files Mandate
https://kyruushealth.com/diving-into-the-details-what-you-need-to-know-about-the-machine-readable-files-mandate/ | May 20, 2021

Take a look at the requirements and important considerations for health plans when it comes to the machine-readable files mandate.

You are probably knee-deep in preparation for the Transparency in Coverage mandate, as the first deadline for machine-readable files is quickly approaching on January 1, 2022. This requirement involves sharing new data that is likely housed across your organization, and even outside it. There is a lot to prepare for, but there is no need to carry that burden alone. Let's take a look at the requirements and important considerations for the three machine-readable files.

File-specific requirements

Each of the three machine-readable files needs to include specific information around covered items, services, and prescription drugs for your in- and out-of-network provider rates. Details of the three files include:

  • In-network negotiated rate files—Unique to this file is that rates for items and services of all contracted providers need to be included. If a plan uses leased networks, these data must be blended with the plan's local network data to produce one compliant machine-readable file. Each listed rate needs to be provider-specific and therefore associated with the National Provider Identifier (NPI), Tax Identification Number (TIN), and Place of Service Code for each provider, along with the last date of the contract term or expiration date. If a plan uses reimbursement arrangements beyond a standard fee-for-service model, such as bundled payments, the file also needs to identify the primary billing code, the total cost of the bundle, and the list of services included.
  • Out-of-network allowed amount files—Like the in-network negotiated rate files, each rate for items and services needs to be associated with the NPI, TIN, and Place of Service Code for each out-of-network provider. Specific to this file type, only historical payments from the 90-day period beginning 180 days prior to publication need to be included, and only for providers with 20 or more claims in that window. Out-of-network drug pricing also needs to be included.
  • In-network negotiated prescription drug files—For each covered option, the file needs to include the National Drug Code (NDC) and the proprietary and nonproprietary names assigned to that code by the FDA. As with the other files, the negotiated rates for each NDC need to be provider-specific, associated with the NPI, TIN, and Place of Service Code, and tied to the last date of the contract term or expiration date for each negotiated rate. In addition to current contract rates, historical net prices for the 90-day period beginning 180 days prior to publication of the file also need to be included, unless there are fewer than 20 claims for payments. A sketch of this reporting window follows the list.
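
Because the historical-data window trips up many teams, here is a small sketch of the date arithmetic as we read the rule: a 90-day reporting period beginning 180 days prior to the file's publication date, with amounts omitted when a provider has fewer than 20 claims. The function and threshold names are ours, not from the mandate text.

```python
from datetime import date, timedelta

def reporting_window(publication_date: date) -> tuple:
    """The 90-day period beginning 180 days prior to publication."""
    start = publication_date - timedelta(days=180)
    return start, start + timedelta(days=90)

def reportable(claims_in_window: int) -> bool:
    # Providers with fewer than 20 claims in the window are omitted,
    # a privacy threshold in the rule.
    return claims_in_window >= 20

start, end = reporting_window(date(2022, 7, 1))
print(start, end)      # 2022-01-02 2022-04-02
print(reportable(25))  # True
print(reportable(12))  # False
```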

Specifications for all machine-readable files

In addition to the above file-specific requirements, there are many common ones, from monthly update cadences to specific file formats and naming rules. If you are interested in being among the first to know about possible changes, I recommend monitoring the GitHub repository that hosts the technical implementation guide for the tri-departmental price transparency rule. As of today, specific guidelines include:

  • Name and identifier—Each covered service and item of in-network providers needs to include the name of the coverage option and the associated identifier, which should be the Health Insurance Oversight System (HIOS) number or, if unavailable, the related Employer Identification Number (EIN).
  • Billing/Rx code—Billing codes like CPT, HCPCS, etc. for services and items, or the National Drug Code (NDC) for prescription drugs.
  • Dollar amount—Rates for all items, services, and prescription drugs need to be displayed in dollar amounts.
  • File format—Files must conform to a non-proprietary, open-standards format like JSON, XML, or YAML and be made available via HTTPS to ensure the integrity of the data. Dates, file names, and file type names need to follow set standards to meet the mandate requirements (see the naming sketch below).
  • Discoverability—Files need to be made available to the public without restrictions that would impede re-use of the information. Search engine discoverability and accessibility for internet-based and mobile application developers need to be ensured to support the development of innovative consumer-facing tools, as well as access for other entities such as researchers and regulators.
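
As one example of those naming standards, the implementation guide documents a file-naming pattern along these lines; the helper below is our own sketch, so confirm the current convention in the GitHub repository before relying on this exact pattern.

```python
from datetime import date

def mrf_file_name(plan_date: date, payer: str, plan: str, file_type: str) -> str:
    # file_type is one of the standard type names, e.g. "in-network-rates",
    # "allowed-amounts", or "prescription-drugs".
    return f"{plan_date.isoformat()}_{payer}_{plan}_{file_type}.json"

print(mrf_file_name(date(2022, 7, 1), "examplepayer", "exampleplan", "in-network-rates"))
# -> 2022-07-01_examplepayer_exampleplan_in-network-rates.json
```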

Additional considerations

Aside from file types and data elements, there are other considerations around creating the machine-readable files and making them available to the public. Do you want to create, blend, and host all your files yourself? Do you want outside vendors to handle this so your team can stay focused on core, strategic initiatives? Or do you work with a PBM that will supply you with a compliant Rx file, so you only need help with the remaining two machine-readable files?

HealthSparq has almost a decade of experience with cost transparency and machine-readable file creation, and we believe there is more to the mandate than just checking the box. We can help you differentiate yourself in this competitive marketplace. Beyond simply giving members access to data, you can offer guidance so they can make smarter healthcare choices. Contact Marketing@HealthSparq.com to learn more about how we support all requirements of the Transparency in Coverage mandate and the No Surprises Act.

If you are interested in mandate-related resources like webinars, white papers, and my latest video on machine-readable files, please visit our price transparency content hub.

How Health Plans Can Prepare for FHIR Provider Directory Interoperability
https://kyruushealth.com/how-health-plans-can-prepare-for-fhir-provider-directory-interoperability/ | March 31, 2020

These are our tips for getting started.

CMS and ONC recently finalized rules around interoperability that require qualified health plans to publicly expose a provider directory API and to allow members to access their claims and clinical information through a Patient Access API by 2021. Since 2012, HealthSparq has been delivering provider search as part of our health care transparency and guidance platform, HealthSparq One. Our intake and processing of provider and claims data prepared us well for the new rules and allowed us to get right to work with the interoperability community after the rules were first proposed.

Through that work, our team gained insights that build on our existing API and data experience and can help health plans prepare to implement interoperability solutions. Below I share some key takeaways from the first and most crucial step in a health plan's interoperability journey: planning. I then share a bit more detail about the solutions HealthSparq has developed over the last year of interoperability work. If you're interested in hearing more about all the steps we took beyond these initial research and planning stages, take a look at our recent webinar, FHIR and Provider Directories: Lessons Learned and Ways Forward.

From the beginning, focus on the end result

After the proposed rules were announced in early 2019, we focused on the specific implementation guides that would support compliance. Our SMEs worked directly with the implementation guide developers to understand the guides, test against them, and deliver feedback. We built MVP APIs and participated in connectathons to share our work and test connections with others in the community. From there, we continued to iterate and built out a scalable solution.

Having attended many connectathons, I noticed a lot of "stub API" solutions. These can, at a basic level, do the things required of the interoperability APIs, and a stub can be a great way to get up to speed on the FHIR resources and show off the power of interoperability. But what's missing is a solid back end. A simple stub API can meet initial requirements; the hardest challenge is developing a performant, scalable API. The Provider Directory API in particular must be public facing, which for most health plans will require investment in new foundational technology to support a scalable solution. The requirements for a fully scalable solution go above and beyond the bare minimum, but we believe it's worth it. This is the overarching lesson learned that I'd like to share: remain laser focused on building out a fully scalable solution.

Get deep in planning

The first thing you need to understand as you get started is your organization's goals and how they relate to the interoperability rules. From there, your organization should plan to map your data into the implementation guide.

My tips to get started:

  1. Understand your plan's goals and how they relate to the interoperability rules. As you'll learn, the required data for the FHIR resource, especially for the provider directory requirement, is actually pretty lightweight. Basically, it's: name, phone number, location and specialties (see the mapping sketch below). So, if you just want to meet the mandate, you hopefully won't have too much difficulty there. But since you should be thinking about long-term scalability, you can also use the interoperability rules to help meet other goals. For example, most plans have preferred providers. There are going to be a lot of consumer apps, and they will effectively have an opinion on which providers your members should see. Consider leveraging those consumer apps to guide your members to preferred providers. But the apps can only do that if you share the right data with them. That could be tier information or providers' areas of focus. If you want to leverage interoperability to support your organization's strategic goals, you're probably going to have to go above and beyond the mandate. That means mapping more of your data into that FHIR resource.
  2. Dedicate efforts to mapping data. Make sure your organization has planning cycles for mapping data into your FHIR resources. HealthSparq started this effort by bringing our experts into a room, studying the implementation guide, asking a ton of questions, and mapping the data, all before trying to engineer a solution or stand anything up. Get as many of your questions as possible out of the way early, before you start building.
  3. Join the community to ensure the guides actually meet needs. My third recommendation is really a recommendation for us as a community. As I mentioned, HealthSparq worked extensively with the implementation guide authors in 2019, providing feedback to evolve and improve the guides. Some of our feedback was about how data is actually structured in the real world; some was about gaining clarity so the guides are usable. We're going to need to continue to iterate and evolve. This is a standards-based community, so it very much requires your involvement. I recommend providing feedback on the implementation guides to make sure they meet consumers' needs. The implementation guide developers are very engaged and want feedback; take advantage of that.
  4. Develop a data quality strategy and execution plan. One of the top pain points for health plans is data quality and completeness. The interoperability mandates don't solve your data quality problems, but they could expose them publicly. Take this seriously, because data quality can be a competitive differentiator as more and more players get access to your data, and CMS levies large fines for inaccurate provider data. Do you have a data quality strategy and an execution plan? It should take into account the mandates' requirements for how quickly data needs to be updated. How exactly are you getting accurate data? If you have data conflicts, how are you resolving them, and how are you doing that in a timely fashion?

If this is an area that's new to you, it is best to start thinking about it now. If it's something you've already been working on for quite a while, make sure you consider the mandate's new requirements for how to handle these situations.
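
To illustrate how lightweight the core mapping can be, here is a minimal sketch of internal directory fields mapped onto FHIR R4 Practitioner and PractitionerRole resources in the spirit of the Plan-Net guide. The Plan-Net profiles add required extensions and must-support elements beyond what is shown, so treat this as a starting point for the mapping exercise, not a conformant resource.

```python
# Internal directory record (invented field names for illustration).
internal_record = {
    "first": "Jane", "last": "Smith", "phone": "503-555-0100",
    "specialty_code": "207Q00000X",  # NUCC taxonomy code: Family Medicine
    "location_id": "loc-42",
}

practitioner = {
    "resourceType": "Practitioner",
    "id": "prac-1001",
    "name": [{"family": internal_record["last"], "given": [internal_record["first"]]}],
    "telecom": [{"system": "phone", "value": internal_record["phone"]}],
}

practitioner_role = {
    "resourceType": "PractitionerRole",
    "practitioner": {"reference": "Practitioner/prac-1001"},
    "specialty": [{"coding": [{
        "system": "http://nucc.org/provider-taxonomy",
        "code": internal_record["specialty_code"],
    }]}],
    "location": [{"reference": "Location/" + internal_record["location_id"]}],
}
```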

Getting ahead of the interoperability mandate with HealthSparq

With our long history in provider directories and price transparency, we launched our HealthSparq Interoperability Services offering. This includes two FHIR APIs to support health plans: a Provider Directory FHIR API and a Patient Claims Access FHIR API. Drawing on what we learned from our work with the interoperability community, we developed our infrastructure with security and scalability in mind. HealthSparq can help existing clients by leveraging the provider and claims data already used in the member-facing application, HealthSparq One. HealthSparq's robust data transformation process converts health plan data into the necessary FHIR-based resources. Claims and encounter data is held in a secure, HIPAA-compliant data store. A consumer authorizes an application to access their digital claims data via an OAuth2 authorization flow within the SMART on FHIR open standards. Our microservices distribute the provider- or claims-based resources via REST API queries that can scale with your needs. HealthSparq's services follow the Da Vinci PDex Plan-Net Implementation Guide for provider directory requirements and the CARIN Consumer-Directed Payer Data Exchange Implementation Guide (the CARIN IG for Blue Button) for patient access requirements.
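
For a flavor of what consuming such an API looks like, here is a hypothetical search against a Plan-Net style Provider Directory FHIR endpoint. The base URL is invented for illustration; the search parameters (`specialty`, `_include`) are standard FHIR PractitionerRole parameters.

```python
import requests

BASE = "https://fhir.example-plan.com/r4"  # hypothetical endpoint

resp = requests.get(
    f"{BASE}/PractitionerRole",
    params={"specialty": "207Q00000X",  # NUCC code for Family Medicine
            "_include": "PractitionerRole:practitioner"},
    headers={"Accept": "application/fhir+json"},
    timeout=30,
)
resp.raise_for_status()
bundle = resp.json()  # a FHIR searchset Bundle
for entry in bundle.get("entry", []):
    print(entry["resource"]["resourceType"])
```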

CMS rules require a unified consumer experience for the Patient Access API, yet claims and encounters, formularies, and clinical data often reside in disparate systems. To help plans minimize the burden of compliance, HealthSparq can also provide unified access in front of plan-hosted FHIR servers. HealthSparq manages access and identity management, OAuth, analytics, and related concerns through our Proxy Services, and proxies requests to the plan-hosted FHIR servers when necessary.

We take an even deeper dive into the mandates and the steps that follow planning, including building a solid infrastructure to deliver mandated public-facing APIs and ensuring data quality and completeness, in the webinar. I'd encourage you to take a look and let me know in the comments below where your organization currently is as you prepare for the mandates.

Seven Tips for Building a Scalable Data Foundation in Healthcare
https://kyruushealth.com/seven-tips-for-building-a-scalable-data-foundation-in-health-care/ | September 4, 2019

Today, 80 percent of health data remains unstructured and undigested. The healthcare sector is expected to produce 2,314 exabytes of data in 2020 (one exabyte equals one billion gigabytes). To put that into perspective, there are 2.8 billion monthly active users on Facebook, and all of the photos, videos, comments, likes, ads and other data Facebook stores amounts to only 300,000 gigabytes[i]. Healthcare needs to focus on building a scalable data foundation.

Healthcare companies are investing significant amounts of money to develop and execute data strategies to extract valuable information from the surging data ocean while boosting organization-wide growth and success. Better use of Big Data can bring fundamental changes to the healthcare system, including improving outcomes and efficiency at lower costs. But many companies are finding a disconnect between the investment they've made and the value being generated. In fact, even adoption of initiatives under way is proving complex, with 77% of executives reporting that "business adoption of Big Data and AI initiatives remain a major challenge[ii]."

In the race to innovate and maximize the value of data, many companies have failed to set up the right strategy and infrastructure to scale and meet the deluge of impending data demands: not only to accommodate additional use cases for their data, but to accommodate more types of data. This failure makes rapid innovation impossible and can leave existing use cases outdated. One of the biggest challenges for a company today is building a compelling and scalable data foundation strategy that aligns with overall company goals. To address this issue, below are seven tips learned from experience (aka the hard way) to help you set a solid foundation for your healthcare data products and avoid the pitfalls many teams face today.

Tip 1: Know That Data Silos Are Your Enemy

Data silos are the enemy of any data program. They prevent healthcare organizations from capturing a holistic view of the patient: their history, genomics, socioeconomic factors, and other determinants of health. This can lead to inefficiencies in care delivery and management, potentially impacting patient outcomes. Many companies don't have a data product vision and strategy; instead, they have a vision for a product that is powered by data. They assume the data will be there when they need it, without knowing the product specifics (which often emerge through user testing, feature enhancements, and general iterative growth) and without having a data strategy in place. This leads to a reactionary process where data teams don't know how they fit into the big picture and are left reacting to constant requests. One of the most prevalent challenges this structure creates is data silos, which hurt teams now and hurt companies more later.

A frequent challenge with data silos is multiple teams solving the same problem in unique or duplicative ways. This often leads to wasted effort, forcing multiple teams to manage the same process in different areas. Teams either spend a lot of time coordinating with each other to keep their processes in sync, or they don't coordinate and the processes fall out of sync. When the processes are out of sync, you'll get different answers to the same question depending on which team you ask. To avoid data silos and the hazards associated with them, your data strategy should focus on data accessibility.

Tip 2: A Successful Scalable Data Foundation Strategy is Founded on Accessibility

In the past, numerous approaches have been adopted to address data silos and prove out a data strategy. Enterprise data warehouses and data lakes became popular ways to support data accessibility, but they can quickly grow into a change management nightmare or the dreaded data swamp[iii]. Instead of investing significant resources in a single large point of failure such as a data lake, a successful data strategy should focus on the culture and vision around data accessibility, as well as investing in the underlying techniques.

Having data accessibility as a foundational principle of your data teams will influence how data projects are developed. When developing new products or processes, consider additional use cases early. By asking the team and additional stakeholders how they would use this data you can ensure you’re considering the longer-term vision of a product. In many cases, asking questions up front and making simple tweaks accordingly during early stages of the planning process can lead to a lot more long-term value. Without a vision for the future, team members will make decisions that make sense in the short term but box the company into data silos or one-off use cases in the long term.

Tip 3: A Vision is Not a Roadmap

When developing a vision for building a scalable data foundation, it’s important to differentiate between a vision and a detailed roadmap. In many organizations, someone will say they have a vision when they instead have detailed requirements or a list of predefined features to be executed. When you believe you already know all the components that need to be developed, you can become rigid and avoid learning about what’s working as you go. Instead, a product vision should represent the core sense of the product, what you aim to achieve, the opportunities and the threats.

Tip 4: Don’t Fail at Failure

While the larger vision is important, it's also important to support fast iterations and low-cost failures in order to determine the elements needed to achieve the vision. Product management literature has long focused on how to iterate fast for traditional user interface (UI) based products, but the concept has been slower to develop around data products. Many healthcare companies aren't leveraging iterative methodologies and tend to want to solve every edge case in the first pass, losing focus on their goal. As a result, the healthcare sector has often had slower and more costly development cycles than other industries, without learning and building a better product for users along the way. When developing data products, we typically find stakeholders have numerous ideas. It's important to be able to quickly test those ideas to determine whether they are viable, how you'd want to solve them, whether the use case requires additional work, and where the edge cases are. Having a method to perform numerous rapid prototypes is key to scaling and increasing the development cycle's velocity. If you instead attempt to productionize all of your ideas, you'll get bogged down in edge cases or in trying to scale something that doesn't have value.

Tip 5: Invest in the Right Underlying Infrastructure

A key to both rapid prototyping and scaling data products once they have been vetted is investing in the underlying infrastructure. When your data teams are forced to use a single point solution such as an enterprise data warehouse for all projects, they can be boxed in and feel they aren’t able to deliver features that meet expectations. Leveraging cloud technologies such as Google Cloud, Amazon Web Services or Microsoft Azure is a great way to allow your teams to take advantage of the numerous efficiencies gained in this area and focus the team on developing data products rather than managing infrastructure.

Tip 6: Accessibility, Security and Data Privacy Are Not Mutually Exclusive

As part of investing in the underlying infrastructure, it's important to build with both accessibility and security in mind. This is particularly true in healthcare, which was plagued by more breaches than any other sector in 2018, accounting for 25 percent of incidents[iv]. Accessibility, security and data privacy are not mutually exclusive. Avoid the trap of thinking that something hard to access is therefore secure, or that something easily accessible to the people who need it is somehow insecure. Design and invest in infrastructure with fine-grained access and permission controls. It should be easy for users with permissions to consume and leverage data, while the data remains inaccessible to those without permissions. Users with permission to access and build off the data should have a walled garden that lets them work quickly without allowing actions their permissions don't cover; a toy sketch of this idea follows. If you build with security in mind, you can be sure you are minimizing risk at every step.
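
Here is a toy sketch of that walled-garden idea: permissions decide which fields a role may read, rather than granting all-or-nothing access. A real system would enforce this in the data platform's IAM layer; the role and field names here are invented.

```python
PERMISSIONS = {
    "analytics_team": {"claim_id", "service_date", "allowed_amount"},
    "care_team": {"claim_id", "member_id", "service_date", "diagnosis"},
}

def read_claim(role: str, claim: dict) -> dict:
    # Return only the fields this role is permitted to see.
    allowed = PERMISSIONS.get(role, set())
    return {k: v for k, v in claim.items() if k in allowed}

claim = {"claim_id": "c-1", "member_id": "m-9", "service_date": "2019-06-01",
         "allowed_amount": 120.0, "diagnosis": "J06.9"}
print(read_claim("analytics_team", claim))
# {'claim_id': 'c-1', 'service_date': '2019-06-01', 'allowed_amount': 120.0}
```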

Tip 7: Get Hip, Stay Fresh and Keep Learning

Who doesn't love new things? One of the best things about working at the forefront of data products is getting to use new tools and techniques. A key element in allowing your data teams to work quickly while emphasizing accessibility and security is investing in techniques like data anonymization, synthetic data and differential privacy. Supplying your teams with strong tools for generating synthetic data, and letting them develop against synthetic data they can manipulate, reduces the risk of data exposure while ensuring they test against more edge cases. Data anonymization can also allow more users to access data for certain use cases without exposing personally identifiable information, and it is especially useful for analytics teams performing reporting and research. If your reporting and analytics team doesn't need to know actual users, providing access to anonymized data, or answers via differential privacy, can be a great way to meet their needs while protecting individual privacy.
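
As one small example of this kind of tooling, the sketch below uses keyed pseudonymization: member identifiers are replaced with HMAC digests so analysts can still join and count records without seeing real IDs. This alone is not full de-identification under HIPAA; it simply illustrates the sort of capability worth supplying to your teams. The key and field names are placeholders.

```python
import hmac
import hashlib

SECRET_KEY = b"store-me-in-a-vault-and-rotate-me"  # placeholder secret

def pseudonymize(member_id: str) -> str:
    # Keyed hash: stable for joins, irreversible without the key.
    return hmac.new(SECRET_KEY, member_id.encode(), hashlib.sha256).hexdigest()[:16]

record = {"member_id": "M123456", "visit_type": "ED", "cost": 842.17}
safe_record = {**record, "member_id": pseudonymize(record["member_id"])}
print(safe_record)
```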

By focusing on data accessibility and building a scalable data foundation from the beginning, healthcare companies can identify viable data products more quickly and bring them to market faster. If harnessed effectively, healthcare data has the potential to lower costs, reduce hospital readmissions and emergency department visits, prevent adverse drug effects, improve care coordination, and much more. With improved data sharing, health systems, health plans and other organizations can offer consumers more innovative care and better experiences, and change the way we think about healthcare.

Schedule a demo with HealthSparq to see how we can help your organization.
