Medicaid Management Services: Analytical Essay
Optum is a leading health services and innovation firm operating as part of UnitedHealth Group. UnitedHealth Group Incorporated owns and manages organized health systems in the U.S. and internationally, providing operational, technological, and consulting solutions and services to individuals, pharmaceutical companies, healthcare organizations, and national and state governments. Optum aims to help people live healthier lives and to make the health system work better for everyone.
Reports show that healthcare expenses in the U.S. account for around 18% of GDP, and half of U.S. health care spending goes toward treating just 5% of the population. Optum serves around 250,000 health professionals and 6,200 hospitals, thereby providing services to 1 in every 5 US consumers.
In the United States, Medicaid is a program administered jointly by the federal and state governments. It provides health coverage to people with limited income and resources. Optum has over 30 years of experience helping Medicaid agencies tackle their biggest challenges and serves 7.4 million Medicaid beneficiaries. Optum Medicaid Management Services (OMMS) aims to enhance Medicaid program performance through care management, consulting, and technology solutions. OMMS services-based modules give clients an innovative path to MMIS (Medicaid Management Information System) modernization, leveraging managed care best practices. The OMMS platform is built on commercial off-the-shelf (COTS) software proven in managed care, making implementation lower risk than a traditional MMIS. Clients can better manage their Medicaid programs using these services-based modules.
Longitudinal data is data collected through repeated observations of the same subjects over an extended period of time, which makes it useful for measuring change. Longitudinal patient data incorporates information about a patient's medical history and health care utilization over an extended time frame. This data can be used to track patient treatments over time or to answer specific business questions. Sources include inpatient hospital billings, retail pharmacy prescription claims, and medical claims data.
As described above, Optum Medicaid Management Services improve Medicaid program performance through services-based modules built on COTS software; the sections that follow describe how the system processes member enrollment data.
The system receives information about members enrolled in a Medicaid health care benefit plan and must validate the input data to check whether each member is eligible for the plan. An insurance claim is a request submitted to an insurance company by a policyholder for coverage or compensation for a covered loss or policy event. A claim submitted by a member is issued payment only if the member is eligible for the benefit plan. Member enrollment and eligibility information reaches the system through the inputs described in the following sections; the sketch immediately below illustrates the eligibility gate.
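This minimal sketch (plain Python; the field names and records are invented for illustration, not taken from OMMS) pays a claim only when the member's eligibility span covers the date of service.

from datetime import date

# Hypothetical enrollment records keyed by member ID (illustrative only).
ENROLLMENT = {
    "M1001": {"plan": "MEDICAID-STD",
              "eligible_from": date(2021, 1, 1),
              "eligible_to": date(2021, 12, 31)},
}

def is_eligible(member_id, service_date):
    # A member is eligible if a record exists and its span covers the service date.
    record = ENROLLMENT.get(member_id)
    return record is not None and record["eligible_from"] <= service_date <= record["eligible_to"]

def process_claim(member_id, service_date, amount):
    # Only claims from eligible members are issued payment.
    if is_eligible(member_id, service_date):
        return f"PAY {amount:.2f} for member {member_id}"
    return f"DENY claim for member {member_id}: not eligible on {service_date}"

print(process_claim("M1001", date(2021, 6, 15), 125.00))  # paid
print(process_claim("M9999", date(2021, 6, 15), 125.00))  # denied: unknown member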
The Medicaid Management Services platform is built on commercial off-the-shelf (COTS) software proven in managed care. OMMS modules are flexible and configurable; they can interoperate with existing systems, allowing customers to purchase only the modules they require. Medicaid Management Services is also responsible for keeping the system current with federal regulatory and technology changes.
The EDI 834 document contains information about the enrollment of members in a health care benefit plan. It is used by insurance agencies and government agencies to enroll members in a benefit plan. HIPAA specifies EDI 834 as the standard for the electronic exchange of member enrollment information, including employee demographic information, plan subscription, and benefits. The 834 transaction supports enrollment of new members, changes to an existing member's enrollment, reinstatement of enrollment, and disenrollment (termination) of members.
A typical 834 document also includes the subscriber's name and identification, plan network identification, and the subscriber's eligibility and benefit information.
In addition to 834 files, input may also consist of supplemental data for a member.
Supplemental data consists of additional clinical information a health plan receives about a member beyond administrative claims. Supplemental data saves time and money: it simplifies data acquisition, eliminates the need to chase individual charts, and improves the data available for reporting and patient analytics.
In healthcare, the Health Insurance Portability and Accountability Act (HIPAA) mandates the protection of patient information and provides detailed instructions for protecting and handling a patient's personal health information. The HIPAA Security Rule ensures the confidentiality, integrity, and availability of electronic protected health information (ePHI).
The input 834 file undergoes HIPAA validation to ensure that it is HIPAA compliant. Only valid 834 files are taken further for processing.
The 834 File Processing Job parses the standard EDI 834 file, translates the data to the Oracle layout, and loads it into the Temporary Data Store, an Oracle database that acts as the staging layer and holds the 834 file data for further processing. A minimal sketch of this parse-and-stage step appears below, followed by the steps involved in the data processing.
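This sketch is an assumption-laden illustration, not the actual job: it parses a simplified 834 fragment (the INS, NM1, and DTP segment names come from the X12 834 standard; the values are invented) and stages the result in SQLite, standing in for the Oracle temporary data store.

import sqlite3

# Simplified 834 fragment: segments end with "~", elements are separated by "*".
RAW_834 = (
    "INS*Y*18*030*XN*A*E**FT~"
    "NM1*IL*1*DOE*JOHN****34*123456789~"
    "DTP*356*D8*20210101~"
)

def parse_834(raw):
    # Yield one dict per member loop (INS + NM1 + DTP) found in the file.
    member = None
    for segment in filter(None, raw.split("~")):
        elements = segment.split("*")
        if elements[0] == "INS":              # start of a new member loop
            if member:
                yield member
            member = {"relationship": elements[2]}
        elif elements[0] == "NM1" and member is not None:
            member["last_name"], member["first_name"] = elements[3], elements[4]
            member["member_id"] = elements[-1]
        elif elements[0] == "DTP" and member is not None and elements[1] == "356":
            member["enroll_date"] = elements[3]   # 356 = eligibility begin date
    if member:
        yield member

# SQLite stands in for the Oracle staging layer here.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE stg_member (member_id TEXT, last_name TEXT, first_name TEXT, enroll_date TEXT)")
for m in parse_834(RAW_834):
    db.execute("INSERT INTO stg_member VALUES (?, ?, ?, ?)",
               (m["member_id"], m["last_name"], m["first_name"], m.get("enroll_date")))
print(db.execute("SELECT * FROM stg_member").fetchall())

The data processing then proceeds through the following steps.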
The client may provide input in a form that the system does not support. Hence, client-specific values are mapped to the standard values that the system supports, for example as sketched below.
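As a small illustration (the codes here are invented, not actual OMMS mappings), a client might send gender as letters while the system expects numeric codes; a lookup table performs the translation and unmapped values are flagged:

# Hypothetical client-to-standard code mapping; real mappings are configured per client.
GENDER_MAP = {"M": "1", "F": "2", "U": "0"}

def map_value(mapping, client_value):
    # Translate a client-specific code to the standard code, or flag it as unmapped.
    try:
        return mapping[client_value]
    except KeyError:
        raise ValueError(f"Unmapped client value: {client_value!r}")

print(map_value(GENDER_MAP, "F"))  # -> "2"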
Client-specific and global business rules are stored in the database. They are executed one by one to validate the data sent by the client at different steps of data processing, and the application marks a record as an error if any rule fails on it. A generic sketch of this table-driven pattern follows.
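One common way to realize such rules (this is a generic pattern, not the actual OMMS rule engine, and the rules shown are invented) is to represent each rule as an identifier plus a predicate and run them in sequence over each record:

# Each rule: (rule_id, description, predicate). In production these would be
# loaded from database tables rather than hard-coded.
RULES = [
    ("R001", "member_id must be present",
     lambda r: bool(r.get("member_id"))),
    ("R002", "enroll_date must be 8 digits (CCYYMMDD)",
     lambda r: len(r.get("enroll_date", "")) == 8 and r["enroll_date"].isdigit()),
]

def validate(record):
    # Return the IDs of all failed rules; an empty list means the record is valid.
    return [rule_id for rule_id, _desc, check in RULES if not check(record)]

print(validate({"member_id": "123456789", "enroll_date": "20210101"}))  # []
print(validate({"member_id": "", "enroll_date": "2021-01-01"}))  # ['R001', 'R002']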
Valid data is inserted into the data store for later use.
Input provided by the client may not be supported by the claims processing system. Hence, client-specific values are transformed to a standard format that can be fed to the claims processing system.
Existing data in the claims processing system is loaded into the staging layer. This is done to compare the new incoming data with the existing data in the system and perform appropriate operations.
After comparing the client input data with the data present in the system, client-specific and global business rules are applied. Only the valid records are sent to the claims processing system.
Input data is compared with the existing data in the system to identify whether a record is new or already exists. Based on this, the incoming record is added if it is new, updated if it exists but has changed, or left untouched if it is unchanged, as the sketch below illustrates.
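A minimal sketch of this comparison, keyed by member ID over invented records:

def classify(incoming, existing):
    # Compare each incoming record with the current system snapshot by member ID.
    actions = []
    for member_id, new_row in incoming.items():
        old_row = existing.get(member_id)
        if old_row is None:
            actions.append((member_id, "ADD"))        # not in the system yet
        elif old_row != new_row:
            actions.append((member_id, "UPDATE"))     # exists but attributes changed
        else:
            actions.append((member_id, "NO_CHANGE"))  # identical: nothing to do
    return actions

existing = {"M1": {"plan": "A"}, "M2": {"plan": "B"}}
incoming = {"M1": {"plan": "A"}, "M2": {"plan": "C"}, "M3": {"plan": "A"}}
print(classify(incoming, existing))
# [('M1', 'NO_CHANGE'), ('M2', 'UPDATE'), ('M3', 'ADD')]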
This is the last module of file preprocessing. After Event Handler Processing and Change Record Identification, data is loaded into the final tables for claims processing. From these tables, a file in a key-value data format is generated and loaded into the claims processing system.
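The actual layout of that key-value file is not described here, so the following is only a toy rendering of the idea: each final-table record is flattened into one key=value pair per line, one block per member.

FINAL_ROWS = [
    {"member_id": "123456789", "last_name": "DOE", "plan": "MEDICAID-STD"},
]

def to_key_value(rows):
    # Render each record as KEY=VALUE lines, with a blank line between members.
    return "\n\n".join(
        "\n".join(f"{key.upper()}={value}" for key, value in row.items())
        for row in rows
    )

print(to_key_value(FINAL_ROWS))
# MEMBER_ID=123456789
# LAST_NAME=DOE
# PLAN=MEDICAID-STD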
The Data Store is an Oracle database where all the information that comes from the client is stored. It is an exact snapshot of the data in the input 834 file and serves as the master database in which processed data is stored for later use, making it the single source of truth for member information.
As defined earlier, an insurance claim is a request submitted to an insurance company by a policyholder for coverage or compensation for a policy event or covered loss. The insurance company verifies the claim and, if it is valid, issues payment to the policyholder or to an approved party on behalf of the insured. The claims processing system checks whether a submitted claim is valid; only valid claims are issued payment.
As introduced earlier, longitudinal data is collected through repeated observations of the same subjects over an extended period of time, which makes it useful for measuring change; longitudinal studies give unique and meaningful insights. Longitudinal patient data incorporates information about a patient's medical history and health care utilization over an extended time frame and can be used to track patient treatments over time or to answer specific business questions. Longitudinal Data Engineering involves the storing, processing, and enrichment of longitudinal data.
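As a concrete, made-up example, a member's longitudinal record can be modeled as a time-ordered list of observations, from which change over time is measured:

from datetime import date

# Hypothetical longitudinal record: repeated observations of one patient over time.
patient_history = [
    {"date": date(2019, 3, 1), "event": "office_visit", "hba1c": 8.1},
    {"date": date(2020, 3, 5), "event": "office_visit", "hba1c": 7.4},
    {"date": date(2021, 3, 2), "event": "office_visit", "hba1c": 6.9},
]

# Measuring change over time is the point of longitudinal data: here, the
# trend in a lab value across visits.
values = [obs["hba1c"] for obs in sorted(patient_history, key=lambda o: o["date"])]
print(f"HbA1c trend: {values}, net change {values[-1] - values[0]:+.1f}")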
A data pipeline is a set of processing elements that moves data from one source system to another, carrying out data transformation along the way. It serves as a processing engine, sending data through filters, transformative applications, and APIs. It eliminates many manual steps and enables a smooth, automated flow of data from one stage to the next. A data pipeline starts by defining what data is collected, and where and how it is collected; it then automates the processes involved in extracting, processing, aggregating, validating, and loading that data for further analysis and visualization. It provides end-to-end velocity and ensures reliability by eliminating errors and combatting bottlenecks and latency.
A data pipeline is especially helpful when data originates in many sources, must be processed continuously, or must be delivered reliably to multiple downstream consumers.
ETL stands for Extract, Transform, and Load. ETL systems extract data from one system, perform transformations on the data, and load it into a data warehouse or database. ETL is a subset of a data pipeline: in a data pipeline, data can be loaded to multiple targets, such as a data lake or an Amazon S3 bucket. A minimal ETL sketch follows.
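In this sketch the source is an in-memory list, the target is SQLite, and the transformation normalizes dates and converts dollar amounts to integer cents; all three are stand-ins chosen for illustration.

import sqlite3

def extract():
    # Extract: pull raw claim rows from a source system (here, a hard-coded list).
    return [("M1001", "2021-06-15", "125.50"), ("M1002", "2021-06-16", "89.00")]

def transform(rows):
    # Transform: strip dashes from dates and convert amounts to integer cents.
    return [(member, day.replace("-", ""), int(round(float(amount) * 100)))
            for member, day, amount in rows]

def load(rows, db):
    # Load: write the transformed rows into the warehouse table.
    db.executemany("INSERT INTO claims VALUES (?, ?, ?)", rows)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE claims (member_id TEXT, service_date TEXT, amount_cents INTEGER)")
load(transform(extract()), db)
print(db.execute("SELECT * FROM claims").fetchall())
# [('M1001', '20210615', 12550), ('M1002', '20210616', 8900)]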
Apache Spark is a general-purpose engine for processing data on a large scale. It is a distributed processing engine that works on a cluster of computers. The Apache Spark ecosystem comprises Spark Core, Spark SQL, Spark Streaming, MLlib for machine learning, and GraphX for graph processing.
The heart of Spark is Spark Core, which consists of the computing engine and the Spark Core APIs and supports distributed data processing. Spark follows a master-slave architecture: for each Spark application it creates one master process (the driver) and many slave processes (the executors). The driver is responsible for analyzing, distributing, scheduling, and monitoring work across the executors; the executors run the code given to them by the driver and return their status to it. Spark breaks the application into tasks and hands them to the executors, thereby carrying out parallel execution, as the small example below shows.
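A minimal PySpark example of this driver/executor split, assuming a local pyspark installation ('local[2]' simulates a cluster with two worker threads; the claim data is invented):

from pyspark.sql import SparkSession

# Creating the session starts the driver; executors are local worker threads here.
spark = SparkSession.builder.master("local[2]").appName("demo").getOrCreate()
sc = spark.sparkContext

# The driver splits this dataset into two partitions (tasks) for the executors.
claims = sc.parallelize([("M1", 125.5), ("M2", 89.0), ("M1", 42.0)], numSlices=2)

# Each executor aggregates its partition; partial results are merged and
# collected back at the driver.
totals = claims.reduceByKey(lambda a, b: a + b).collect()
print(totals)  # e.g. [('M1', 167.5), ('M2', 89.0)]

spark.stop()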
The Longitudinal Data Pipeline is a scalable, reliable, automated data processing pipeline that stores and processes longitudinal data. It runs on a Spark cluster.
All customer data, irrespective of the source, initially arrives in raw form; data that has not been changed since its acquisition is referred to as raw data. Cleaning and transformations are performed to convert the raw data into a standard format that can be analyzed, for example as sketched below.
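A small, purely illustrative cleaning step: trimming whitespace, normalizing case, and standardizing dates so that downstream analysis sees a single format.

from datetime import datetime

def parse_date(text):
    # Accept either MM/DD/YYYY or YYYY-MM-DD and emit ISO-format dates.
    for fmt in ("%m/%d/%Y", "%Y-%m-%d"):
        try:
            return datetime.strptime(text.strip(), fmt).date().isoformat()
        except ValueError:
            pass
    raise ValueError(f"Unrecognized date: {text!r}")

def clean_record(raw):
    # Normalize one raw record into the pipeline's standard format.
    return {
        "member_id": raw["member_id"].strip(),
        "last_name": raw["last_name"].strip().upper(),
        "service_date": parse_date(raw["service_date"]),
    }

raw = {"member_id": " M1001 ", "last_name": "doe ", "service_date": "06/15/2021"}
print(clean_record(raw))
# {'member_id': 'M1001', 'last_name': 'DOE', 'service_date': '2021-06-15'}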
This stage involves building data enrichment pipelines specific to the product.
The two most common kinds of data enrichment are demographic enrichment and geographic enrichment; the sketch below shows the general pattern of joining reference data onto member records.
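In this illustration of geographic enrichment, a derived attribute (county) is attached to each member record via a reference lookup; the ZIP-to-county table is invented.

# Hypothetical reference data used to enrich member records.
ZIP_TO_COUNTY = {"55101": "Ramsey", "55401": "Hennepin"}

members = [
    {"member_id": "M1001", "zip": "55101"},
    {"member_id": "M1002", "zip": "55401"},
]

# Enrichment: join the reference attribute onto each record.
enriched = [dict(m, county=ZIP_TO_COUNTY.get(m["zip"], "UNKNOWN")) for m in members]
print(enriched)
# [{'member_id': 'M1001', 'zip': '55101', 'county': 'Ramsey'},
#  {'member_id': 'M1002', 'zip': '55401', 'county': 'Hennepin'}]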
In this stage, pipelines will be implemented to read from various locations and populate product data marts.
Figure 3: Longitudinal Data Pipeline stages (infrastructure; data storage, modelling, and specifications).
Medicaid Management Services aim to improve Medicaid program performance through care management, consulting, and technology solutions. For the client, this means lower-risk implementation than a traditional MMIS, the flexibility to purchase only the modules required, and a system kept current with federal regulatory and technology changes.
Longitudinal Data Engineering involves the storing and processing of longitudinal data. It enables data enrichment, cohort analysis, and data extraction, which together yield useful insights from the data.
During the first three months of the internship, I worked with the Longitudinal Data Engineering team, which focuses on storing and processing longitudinal data. Data enrichment, cohort analysis, and data extraction are carried out to derive useful insights from the data.
For the next two months, I worked with the Optum Medicaid Management Services team. Optum Medicaid Management Services aims to improve Medicaid program performance through its technology solutions: flexible and configurable modules developed from commercial off-the-shelf (COTS) software proven in managed care.
My internship at OptumInsight was a great opportunity for learning and professional development. I learned a lot from my work at Optum, gained many new skills, and improved my technical knowledge. I was also able to experience the workload of a real working environment. It was an excellent and rewarding experience.