Medicaid Management Services: Analytical Essay


Optum is a leading health services and innovation firm that operates as part of UnitedHealth Group. UnitedHealth Group Incorporated owns and manages organized health systems within the U.S. and internationally. It provides operational, technological, and consulting solutions and services to individuals, pharmaceutical companies, healthcare organizations, and national and state governments. Optum aims to help people live healthier lives and to make the health system work better for everyone.

Reports show that healthcare expenses in the U.S. account for around 18% of GDP, and half of all U.S. health care spending goes to treating just 5% of the population. Optum serves around 250,000 health professionals and 6,200 hospitals, thereby providing services to 1 in every 5 US consumers.

In the United States, Medicaid is a program administered jointly by the federal and state governments. It provides health coverage to people with limited resources and income. Optum has over 30 years' experience helping Medicaid agencies tackle their biggest challenges, serving 7.4 million Medicaid beneficiaries. Optum Medicaid Management Services (OMMS) aim to enhance Medicaid program performance through care management, consulting, and technology solutions. OMMS services-based modules provide clients with an innovative solution to MMIS (Medicaid Management Information System) modernization, leveraging managed care best practices. The OMMS platform is developed using commercial off-the-shelf (COTS) software that is proven in managed care, making implementation lower risk than a traditional MMIS. Clients can better manage their Medicaid programs using these services-based modules.

Longitudinal data is data collected through repeated observations of the same subjects over an extended period of time; it is useful for measuring change. Longitudinal patient data incorporates information about a patient’s medical history and health care utilization over some extended time frame. This data can be used to track patient treatments over time or to answer a specific business question. Sources of such data include inpatient hospital billings, retail pharmacy prescription claims, and medical claims data.

Chapter 2. Methodology and approach

2.1 Medicaid Management Services

Medicaid Management Services aim to improve Medicaid program performance through care management, consulting, and technology solutions. The Optum Medicaid Management Services (OMMS) platform is built from commercial off-the-shelf (COTS) software that is proven in managed care, making implementation lower risk than a traditional MMIS. OMMS services-based modules provide clients with an innovative solution to MMIS (Medicaid Management Information System) modernization, leveraging managed care best practices. Clients can better manage their Medicaid programs using these services-based modules.

The system receives information about members enrolled in a Medicaid health care benefit plan and must validate the input data to check whether each member is eligible for the plan. An insurance claim is a request submitted to an insurance company by a policyholder for coverage or compensation for a policy event or covered loss. A claim submitted by a member is issued payment only if the member is eligible for the benefit plan. The member enrollment and eligibility information is:

  • stored in a master database for later use
  • sent to the claims processing system to process claims

The Medicaid Management Services platform is built using commercial off-the-shelf (COTS) software that is proven in managed care. OMMS modules are flexible and configurable. They can interoperate with existing systems, allowing customers the flexibility to purchase only the modules they require. Medicaid Management Services are also responsible for maintaining the system for federal regulatory and technology changes.

2.2 Member Eligibility File Processing

Figure 1: Member Eligibility File Processing System

2.2.1 Input Member Data: EDI 834 File

The EDI 834 document contains information about the enrollment of members in a health care benefit plan. It is used by insurance agencies and government agencies to enroll members in a benefit plan. HIPAA specifies EDI 834 as the standard for the electronic exchange of member enrollment information, including employee demographic information, plan subscription, and benefits. The 834 transaction supports the following functions relative to health plans:

  • New enrollments
  • Changes in a member’s enrollment
  • Reinstatement of a member’s benefit enrollment
  • Disenrollment of members

A typical 834 document also includes the following information:

  • Product/service identification
  • Plan network identification
  • Subscriber name and identification
  • Subscriber eligibility and/or benefit information
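To make that layout concrete, here is a minimal Python sketch of splitting an 834 document into segments and elements. The `~` and `*` delimiters and the sample segment values are illustrative assumptions; a production parser would read the actual delimiters from the ISA envelope rather than hard-coding them.

```python
# Minimal sketch of splitting an X12 834 document into segments and elements.
# Assumes "~" terminates segments and "*" separates elements; real X12
# delimiters are declared in the ISA envelope, so this is illustrative only.

def parse_834(raw: str) -> list[list[str]]:
    segments = [s.strip() for s in raw.split("~") if s.strip()]
    return [segment.split("*") for segment in segments]

# Illustrative (not real) member enrollment segments
sample = (
    "INS*Y*18*021*XN~"                    # member-level enrollment detail
    "NM1*IL*1*DOE*JANE****MI*123456789~"  # subscriber name and identification
    "HD*021**HLT*PLAN001~"                # health coverage / plan identification
)

for elements in parse_834(sample):
    print(elements[0], elements[1:])
```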

In addition to 834 files, input may also consist of supplemental data for a member.

Supplemental data consists of additional clinical information received by a health plan about a member, beyond administrative claims. Supplemental data saves time and money: it simplifies data acquisition, eliminates the need to chase individual charts, and improves the data available for reporting and patient analytics.

2.2.2 HIPAA Validation

In healthcare, the Health Insurance Portability and Accountability Act (HIPAA) mandates the protection of patient information and provides detailed instructions for protecting and handling a patient’s personal health information. HIPAA ensures:

  • privacy of health information
  • administrative simplification
  • security of electronic records
  • insurance portability

The input 834 file undergoes HIPAA validation to ensure that it is HIPAA compliant. Only valid 834 files are taken further for processing.
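As an illustration, a structural pre-check might look like the following Python sketch. Real HIPAA compliance validation uses dedicated X12 validators; this sketch only verifies that the interchange envelope is present and that the transaction set is an 834, under the same delimiter assumptions as above.

```python
# Illustrative structural pre-check of an 834 file, not full HIPAA validation.
# Verifies only that the interchange envelope is present and that the
# transaction set is an 834 (benefit enrollment and maintenance).

def looks_like_834(raw: str) -> bool:
    segments = [s.strip() for s in raw.split("~") if s.strip()]
    if not segments or not segments[0].startswith("ISA*"):
        return False  # missing interchange header
    if not segments[-1].startswith("IEA*"):
        return False  # missing interchange trailer
    # The ST segment names the transaction set; code 834 = benefit enrollment
    return any(s.startswith("ST*834*") for s in segments)
```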

2.2.3 834 File Processing Job

This job parses the standard EDI 834 file and loads data from the file into the staging layer for further data processing.

2.2.4 Temporary Data Store

The 834 File Processing Job translates the 834 files to Oracle layout and loads the data into the temporary data store. The Temporary Data Store is an Oracle database that acts as a staging layer. It holds the 834 file data for further processing. Various steps involved in the data processing are:

  1. Transform client-specific values to a common standard:

The client may provide input in a format that the system does not support. Hence, client-specific values are mapped to the standard values that the system supports (see the sketch after this list).

  2. Execute customer-specific and global business rules:

Client-specific and global business rules are stored in the database. These are executed one by one to validate the data sent by the client at different steps of data processing. The application marks a record as an error if any of these rules fails for it.

  3. Load the data into the Data Store:

Valid data is inserted into the data store for later use.

  4. Transform client feed data to the standard supported by the claims processing system:

Input provided by the client may not be supported by the claims processing system. Hence, client-specific values are transformed into a standard that can be fed to the claims processing system.

  5. Load the existing claims data into the temporary data store:

Existing data in the claims processing system is loaded into the staging layer. This is done to compare the new incoming data with the existing data in the system and perform appropriate operations.

  6. Execute customer-specific and global business rules for claims processing:

After comparing the client input data with the data present in the system, client-specific and global business rules are applied. Only the valid records are sent to the claims processing system.

  7. Event handler processing and change record identification:

Input data is compared with the existing data in the system. This is to identify whether a record is a new one or already exists in the system. Based on this, one of the following actions is taken on the incoming record:

  • Incoming record is inserted into the system
  • Incoming record is ignored if it is marked as an error
  • Existing record in the system is updated.
  8. Load the data into the final tables for claims processing:

This is the last module of file preprocessing. After Event Handler Processing and Change Record Identification, data is loaded into the final tables for claims processing. From these tables, a file is generated which is of key-value data format. This file is loaded into the claims processing system.

  9. Claims processing:

The claims processing job is triggered. Once the job execution has completed, the file status is updated as follows:

  • Complete: upon successful file processing
  • Incomplete: upon unsuccessful file processing or system error

Figure 2: Member Data Processing Steps
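As a hedged illustration of steps 1 and 7 above, the Python sketch below maps client-specific codes to system-standard values and classifies each incoming record as an insert, update, or ignore. The field names and the mapping table are hypothetical, not the actual OMMS schema.

```python
# Hypothetical sketch of step 1 (transform client-specific values to a
# common standard) and step 7 (change record identification). Field names
# and the mapping table are illustrative, not the real OMMS schema.

CLIENT_TO_STANDARD = {"M": "MALE", "F": "FEMALE", "U": "UNKNOWN"}

def standardize(record: dict) -> dict:
    out = dict(record)
    out["gender"] = CLIENT_TO_STANDARD.get(record.get("gender", "U"), "UNKNOWN")
    return out

def classify(incoming: dict, existing: dict | None, is_error: bool) -> str:
    if is_error:
        return "ignore"   # record failed a business rule earlier
    if existing is None:
        return "insert"   # not in the system yet: new enrollment
    if incoming != existing:
        return "update"   # member exists but details have changed
    return "ignore"       # identical record: nothing to do
```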

2.2.5 Data Store

Data Store is an Oracle database where all the information that comes from the client is stored. It holds an exact snapshot of the data in the input 834 file and serves as the master database in which processed data is stored for later use. It is the single source of truth for member information.

2.2.6 Claims Processing System

An insurance claim is a request submitted to an insurance company by a policyholder for coverage or compensation for a policy event or covered loss. The insurance company verifies the claim and, if it is valid, issues payment to the policyholder or an approved party on behalf of the insured. The claims processing system checks whether a submitted claim is valid; only valid claims are issued payment.
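A minimal sketch of that eligibility gate, with assumed field names (not the actual system schema):

```python
# Illustrative eligibility check for a claim: payment is issued only if the
# member was enrolled in the benefit plan on the date of service. Field
# names are assumptions for illustration.
from datetime import date

def is_payable(claim: dict, eligibility: dict) -> bool:
    service_date: date = claim["service_date"]
    return (
        claim["member_id"] == eligibility["member_id"]
        and eligibility["effective_from"] <= service_date <= eligibility["effective_to"]
    )

# Example: claim on 2022-03-15 against coverage effective for all of 2022
claim = {"member_id": "A1", "service_date": date(2022, 3, 15)}
coverage = {"member_id": "A1",
            "effective_from": date(2022, 1, 1),
            "effective_to": date(2022, 12, 31)}
assert is_payable(claim, coverage)
```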

2.3 Longitudinal Data Engineering

Longitudinal data is data collected through repeated observations of the same subjects over an extended period of time. It is useful for measuring change, and longitudinal studies give unique and meaningful insights.

Longitudinal patient data incorporates information about a patient’s medical history and health care utilization over some extended time frame. This data can be used to track patient treatments over time or to answer a specific business question. Longitudinal Data Engineering involves the storing, processing, and enrichment of longitudinal data.

Data Pipeline

A data pipeline is a set of processing elements that move data from one source system to another, carrying out data transformations along the way. It serves as a processing engine, sending data through filters, transformative applications, and APIs as it arrives. It eliminates many manual steps and enables a smooth, automated flow of data from one station to the next. A data pipeline starts by defining what, where, and how data is collected, then automates the processes involved in extracting, processing, aggregating, validating, and loading data for further analysis and visualization. It provides end-to-end velocity and ensures reliability by eliminating errors and combating bottlenecks and latency (a minimal sketch follows the list below).

Data pipeline is especially helpful when:

  • Huge amounts of data are to be processed
  • Highly sophisticated data analysis is required
  • Data is to be stored in the cloud
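A minimal Python sketch of a pipeline as a chain of processing elements, each handing its output to the next. The stage names and record fields are illustrative assumptions.

```python
# Minimal sketch of a data pipeline: a chain of processing elements, each
# taking records in and handing transformed records to the next stage.
from typing import Callable, Iterable

Stage = Callable[[Iterable[dict]], Iterable[dict]]

def run_pipeline(records: Iterable[dict], stages: list[Stage]) -> list[dict]:
    for stage in stages:
        records = stage(records)   # output of one stage feeds the next
    return list(records)

# Illustrative stages: filter out invalid records, then normalize names
def drop_invalid(records: Iterable[dict]) -> Iterable[dict]:
    return (r for r in records if r.get("member_id"))

def normalize(records: Iterable[dict]) -> Iterable[dict]:
    return ({**r, "name": r["name"].strip().title()} for r in records)

cleaned = run_pipeline(
    [{"member_id": "1", "name": "  jane doe "}, {"name": "missing id"}],
    [drop_invalid, normalize],
)
print(cleaned)  # [{'member_id': '1', 'name': 'Jane Doe'}]
```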

ETL

ETL stands for Extract, Transform, and Load. ETL systems extract data from one system, perform transformations on the data, and load it into a data warehouse or database. ETL is a subset of a data pipeline: in a data pipeline, data can be loaded to multiple targets, such as a data lake or an AWS S3 bucket.
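For example, a toy ETL job might extract rows from a CSV file, transform them, and load them into a SQLite table standing in for the warehouse. The file, table, and column names here are assumptions.

```python
# Hedged ETL sketch: extract from a CSV source, transform, and load into a
# SQLite table standing in for the data warehouse. Names are illustrative.
import csv
import sqlite3

def etl(csv_path: str, db_path: str) -> None:
    # Extract: read raw rows from the source system
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))

    # Transform: keep the fields we need and cast amounts to numbers
    cleaned = [(r["member_id"], float(r["claim_amount"])) for r in rows]

    # Load: write the transformed rows into the warehouse table
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS claims (member_id TEXT, amount REAL)")
    con.executemany("INSERT INTO claims VALUES (?, ?)", cleaned)
    con.commit()
    con.close()
```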

Spark

Apache Spark is a general-purpose engine for processing data on a large scale. It works on a cluster of computers and is a distributed processing engine. The Apache Spark ecosystem comprises:

  • a cluster computing engine
  • libraries
  • APIs
  • DSLs

The heart of Spark is Spark Core, which consists of the computing engine and the Spark Core API. Spark Core supports distributed data processing and follows a master-slave architecture: for each Spark application, it creates one master process (the driver) and many slave processes (the executors). The driver is responsible for analyzing, distributing, scheduling, and monitoring work across the executors, while the executors run the code given by the driver and report their status back. Spark breaks the application into tasks and hands them to the executors, thereby carrying out parallel execution. The steps involved in the processing are:

  • read data from the source
  • load the data into spark
  • process the data
  • hold intermediate results
  • write the results to the destination

The Longitudinal Data Pipeline runs on a Spark cluster.
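A minimal PySpark sketch of those steps: read claims from a source path, process them on the cluster, and write the results to a destination. The paths and column names are placeholders, not the actual pipeline's schema.

```python
# Minimal PySpark sketch: read from a source, process the data on the
# cluster, and write results to a destination. Paths and column names are
# placeholders, not the actual pipeline's schema.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("longitudinal-pipeline").getOrCreate()

# read data from the source and load it into Spark
claims = spark.read.csv("/data/raw/claims.csv", header=True, inferSchema=True)

# process the data: count visits per patient over the observation window
visits = claims.groupBy("patient_id").agg(F.count("*").alias("visit_count"))

# write the results to the destination
visits.write.mode("overwrite").parquet("/data/processed/visits")
```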

2.4 Longitudinal Data Pipeline

Longitudinal Data Pipeline is a scalable, reliable, automated data processing pipeline that stores and processes Longitudinal Data.

Various stages in the Longitudinal Data Pipeline are:

  1. Conversion of raw longitudinal data to standard longitudinal data:

All customer data, irrespective of the source, is initially in its raw form. Data that has not been changed since its acquisition is referred to as raw data. Cleaning and transformations are performed to convert the raw data into a standard format that can be analyzed.

  2. Enrichment of standard longitudinal data:
  • A dataset, no matter how detailed, may not contain all the data needed. A data lake or a data swamp full of raw information is often not useful outside of narrow contexts.
  • Data enrichment involves merging third-party data from an external authoritative source with an existing database of first-party customer data. This makes the raw data more useful and detailed and helps make more informed decisions.
  • Hence, the data enrichment process is vital.
  3. Cohort analysis and data aggregation:
  • Cohort analysis is a subset of behavioral analytics that looks at groups of people who have taken a common action during a select period of time. Instead of looking at all of the subjects as a single unit, cohort analysis divides them into groups to help discover patterns (see the sketch after this list).
  • Cohort analysis allows us to analyze only the relevant data, ask a very specific question, and take action on it.
  • It allows you to test a hypothesis quickly and effectively and to get relevant feedback far sooner.
  4. Data extraction:
  • Data extraction involves retrieving data from a data source to store the data or for further processing.
  • Data extraction is the most important part of the ETL process because it involves deciding which data is most important to achieve the business goal driving the ETL.
  • Decisions at this point can heavily influence the use of the data downstream.
  • Data extraction is very useful when dealing with data on a large scale to generate meaningful information.
  5. Master data management (MDM):
  • Master data refers to the business objects containing the most agreed-upon and valuable information.
  • Master data management (MDM) combines data collection processes, IT technology, and software tools to increase the consistency, accuracy, and coordination of data.
  • In data mastering, an unmastered source data record is either linked to an existing master data record or used to create a new master data record.
  • It helps improve the uniformity and quality of key data assets.
  6. Product-specific data enrichment:

This stage involves building data enrichment pipelines specific to the product.

The two most common kinds of data enrichment are:

  • Demographic data enrichment: to obtain patient demographic details
  • Geographic data enrichment: to obtain patient geographic details
  7. Product data loader pipelines:

In this stage, pipelines will be implemented to read from various locations and populate product data marts.
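As a hedged illustration of the cohort analysis stage above, the pandas sketch below assigns each patient to a cohort by the month of their first visit and counts how many patients from each cohort appear in each later month. The column names are assumptions.

```python
# Illustrative cohort analysis: assign each patient to a cohort by the month
# of their first observed visit, then count distinct patients from each
# cohort active in each month. Column names are assumptions.
import pandas as pd

visits = pd.DataFrame({
    "patient_id":  [1, 1, 2, 2, 3],
    "visit_month": ["2021-01", "2021-03", "2021-01", "2021-02", "2021-02"],
})

# cohort = month of each patient's first visit
cohort = visits.groupby("patient_id")["visit_month"].min().rename("cohort")
cohorted = visits.join(cohort, on="patient_id")

# rows: cohort; columns: visit month; values: distinct patients active
table = cohorted.pivot_table(index="cohort", columns="visit_month",
                             values="patient_id", aggfunc="nunique",
                             fill_value=0)
print(table)
```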

Figure 3: Longitudinal Data Pipeline Stages

Infrastructure:

  • Data at every step is stored and processed using a consistent, standard technology stack and infrastructure
  • The infrastructure has built-in automated monitoring
  • The pipeline runs on scalable, reliable, and automated infrastructure
  • Validations and automation tests are built into the pipeline
  • Build and deployment of code happen in an automated way
  • Workflows and processes are standardized and automated
  • Errors at any step are tracked using standard tools
  • Access control is handled centrally with a well-defined process
  • Versioning of pipeline code occurs in a well-defined manner

Data Storage, Modelling, and Specifications:

  • Standardized data specifications for raw, processed, and enriched data
  • Data for each step in the process is standardized and stored in the data lake
  • Enrichments of standard entities are done using enterprise products
  • There is a well-defined way to store data from multiple longitudinal data sources
  • Product-specific enrichments are stored in the data lake
  • There is a well-defined way to version data
  • There is a well-defined way of doing incremental data loads into product stores and into the data lake (as sketched below)
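One common way to do incremental loads is a watermark: only records updated since the previous load's high-water mark are pushed downstream. A minimal sketch, with an assumed `updated_at` field:

```python
# Hedged sketch of a watermark-based incremental load: only records updated
# after the previous load's high-water mark are pushed to the product store.
# The "updated_at" field name is an assumption.
from datetime import datetime

def incremental_batch(records: list[dict],
                      watermark: datetime) -> tuple[list[dict], datetime]:
    new = [r for r in records if r["updated_at"] > watermark]
    next_mark = max((r["updated_at"] for r in new), default=watermark)
    return new, next_mark
```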

Chapter 3. Results and discussion

Medicaid Management Services:

Medicaid Management Services aim to improve Medicaid program performance through care management, consulting, and technology solutions. They provide the following benefits to the client:

  • SaaS-based evergreen solution, architected for future requirements and expansion
  • High availability, redundant, and highly secure infrastructure
  • Platform is pre-built to support multiple clients
  • Scalable architecture, both vertically and horizontally
  • OMMS modules are flexible and configurable. They can interoperate with existing systems, allowing customers the flexibility to purchase only the modules they require.
  • Platform is built from commercial off-the-shelf (COTS) software that is proven in managed care.
  • System is maintained for federal regulatory and technology changes.

Longitudinal Data Engineering:

Longitudinal Data Engineering involves storing and processing of Longitudinal Data. It provides the following benefits:

  • Data processing pipeline is scalable, reliable, and automated
  • Helps answer a specific business question
  • Tracking patient treatments over time gives unique and meaningful insights
  • Infrastructure has built-in automated monitoring

Conclusion

During the first three months of the internship, I worked with the Longitudinal Data Engineering team. The Longitudinal Data Engineering team focuses on storing and processing of Longitudinal Data. Data enrichment, cohort analysis, and data extraction are carried out to derive useful insights from the data.

For the next two months, I worked with the Optum Medicaid Management Services team. Optum Medicaid Management Services aims to improve Medicaid program performance with its technology solutions. Flexible and configurable modules are developed using commercial off-the-shelf (COTS) software that is proven in managed care.

The internship I had at OptumInsight was a great opportunity for learning and professional development. I learned a lot from my work at Optum, gained many new skills, and improved my technical knowledge. I was also able to handle the workload of a real working environment. It was an excellent and rewarding experience.
