Cluster analysis of Insurance data

This project was done by two classmates and me in cooperation with a car insurance company.

Nowadays, most insurance companies make wide use of machine learning in their business operations. It is common to see supervised learning applied to customer segmentation, claim and risk prediction, and fraud detection. These supervised learning tasks usually involve a mix of numeric and categorical features.
📍 numeric: customer age, vehicle age, ...; categorical: whether the car is manual or automatic, ...
However, high-cardinality categorical variables (hccv), such as vehicle type, are hard to deal with because they have many levels. To handle this, further information is gathered on the hccv to form a new dataset, and a lower-dimension representation of that dataset is found which can then be input into the models.
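
To make the idea concrete, here is a minimal sketch (with hypothetical column and level names) of how a lower-dimension representation of a hccv, here one cluster label per level, can be folded back into a supervised-learning table:

```python
import pandas as pd

# Supervised-learning table with a high-cardinality column
# (column names and levels are hypothetical).
policies = pd.DataFrame({
    "customer_age": [34, 51, 27],
    "vehicle_age":  [2, 9, 4],
    "vehicle_type": ["T104", "T872", "T104"],   # hundreds of distinct levels
})

# Stand-in for the output of the cluster analysis described below:
# each hccv level is mapped to the cluster it falls in.
level_to_cluster = {"T104": 0, "T872": 3}

# Replace the raw level with its (much lower-cardinality) cluster id.
policies["vehicle_type_cluster"] = policies["vehicle_type"].map(level_to_cluster)
print(policies)
```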

The dataset provided in this project is exactly this scenario. It contains 65,340 data entries and 347 features describing a hccv. The task is to find a lower-dimension representation of the data for use in downstream tasks.

Before starting, we reviewed the literature to find which analysis methods suit this dataset. Clustering was chosen for the following reasons:
▪️ Clustering is an unsupervised learning method, so no labels are required.
▪️ Clustering scales to large datasets.
▪️ Cluster analysis groups data objects into clusters such that objects within the same cluster are more similar to each other than to objects in other clusters. The resulting cluster assignments give a lower-dimension representation of the data.

The steps are:
▪️ Data cleaning.
▪️ Distance matrix computation.
▪️ Building the models with three algorithms: hierarchical agglomerative clustering (HAC), hierarchical density-based spatial clustering of applications with noise (HDBSCAN), and k-medoids (see the sketch after this list).
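
Below is a minimal sketch of this pipeline on toy data. The distance metric, linkage method, cluster counts, and other hyperparameters are placeholder assumptions (the settings actually used are documented in the reports); the sketch relies on the scipy, hdbscan, and scikit-learn-extra packages.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform
from sklearn.preprocessing import StandardScaler
import hdbscan                                   # pip install hdbscan
from sklearn_extra.cluster import KMedoids       # pip install scikit-learn-extra

# Toy stand-in for the cleaned dataset (the real one has 65,340 rows).
rng = np.random.default_rng(0)
X = StandardScaler().fit_transform(rng.normal(size=(200, 347)))

# Distance matrix computation (condensed and square forms).
condensed = pdist(X, metric="euclidean")
square = squareform(condensed)

# 1) HAC: Ward linkage on the condensed distances, cut into 5 clusters.
hac_labels = fcluster(linkage(condensed, method="ward"),
                      t=5, criterion="maxclust")

# 2) HDBSCAN on the precomputed distance matrix (-1 marks noise points).
hdb_labels = hdbscan.HDBSCAN(metric="precomputed",
                             min_cluster_size=10).fit_predict(square)

# 3) k-medoids on the precomputed distance matrix.
kmed_labels = KMedoids(n_clusters=5, metric="precomputed",
                       random_state=0).fit_predict(square)
```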

Further details on this project are available in "BUSM130 Group Report.pdf", "BUSM131_170376051_Individual_report.pdf" and "Presentation.pdf".
