Titanic is a classic Kaggle competition. The task is to predict which passengers survived the Titanic shipwreck. For more details, refer to https://www.kaggle.com/c/titanic/overview/description.
As it is a famous competition, there exist lots of excellent analyses of how to do EDA and how to build models for this task. See https://www.kaggle.com/startupsci/titanic-data-science-solutions for a reference. In this notebook, we will show how dataprep.eda can simplify the EDA process to a few lines of code.
[1]:
from dataprep.eda import *
from dataprep.datasets import load_dataset

train_df = load_dataset("titanic")
train_df
891 rows × 12 columns
The first thing we need to do is to roughly understand the data, i.e., how many columns are available, which columns are categorical, which columns are numerical, and which columns contain missing values. In dataprep.eda, all of the above questions can be answered with just one line of code!
[2]:
plot(train_df)
plot(df) shows the distribution of each column. For a categorical column, it shows a bar chart in blue. For a numeric column, it shows a histogram in gray. Currently, the column type (i.e., categorical or numeric) is based on the column type in the input dataframe. Hence, if a column type is wrongly identified, you can change its type on the dataframe. For example, by calling df[col] = df[col].astype("object") you can mark col as a categorical column.
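As a concrete illustration of this recast, Pclass is stored as an integer in the Titanic data but is really a categorical feature. A minimal sketch using a toy frame standing in for train_df (the column names follow the Titanic data; the values here are made up):

```python
import pandas as pd

# Toy frame standing in for train_df
df = pd.DataFrame({"Pclass": [3, 1, 2], "Fare": [7.25, 71.28, 8.05]})

# Before the cast, Pclass is a numeric (int64) column
# Recast it so plot(df) treats it as categorical rather than numeric
df["Pclass"] = df["Pclass"].astype("object")

print(df.dtypes)
```

After the cast, plot(df) would render Pclass as a bar chart instead of a histogram.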
From the output of plot(df), we know:

1. All columns: there is 1 label column, Survived, and 11 feature columns: PassengerId, Pclass, Name, Sex, Age, SibSp, Parch, Ticket, Fare, Cabin, Embarked.
2. Categorical columns: Survived, PassengerId, Pclass, Name, Sex, Ticket, Embarked.
3. Numeric columns: Age, SibSp, Parch, Fare.
4. Missing values: from the figure titles, we find there are 3 columns with missing values: Age (19.9%), Cabin (77.1%), Embarked (0.2%).
5. Label balance: from the distribution of Survived, we see that the positive and negative training examples are not well balanced. Only 38% of the data have label Survived = 1.
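The missing-value ratios and label balance reported by plot(df) can be cross-checked directly with pandas. A sketch on a small illustrative frame (the real percentages come from the full train_df):

```python
import pandas as pd

# Small frame mimicking train_df's structure; values are made up
df = pd.DataFrame({
    "Survived": [0, 1, 0, 0, 1],
    "Age": [22.0, None, 26.0, 35.0, None],
    "Cabin": [None, "C85", None, None, None],
})

# Fraction of missing values per column (Age: 0.4, Cabin: 0.8 here)
missing_ratio = df.isna().mean()
print(missing_ratio)

# Label balance: share of rows with Survived == 1
positive_rate = df["Survived"].mean()
print(positive_rate)
```

Running the same two lines on train_df reproduces the 19.9% / 77.1% / 0.2% missing ratios and the 38% positive rate noted above.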
After we roughly know the data, next we want to understand how each feature is correlated with the label column.

### 5.1 Age, Cabin, and Embarked: features with missing values

We first take a look at the features with missing values: Age, Cabin, and Embarked. To understand the missing values, we first call plot_missing(df) to see whether they have any underlying pattern.
[3]:
plot_missing(train_df)
plot_missing(df) shows how missing values are distributed in the input data. From the output, we know that the missing values are uniformly distributed among the records, and there is no underlying pattern. Next, we decide how to handle the missing values: should we remove the feature, remove the rows containing missing values, or fill in the missing values? We first analyze whether these features are correlated with Survived. If they are correlated, then we may not want to remove the feature. We analyze the correlation between two columns by calling plot(df, x, y).
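The three handling options map directly onto standard pandas operations. A sketch on a small illustrative frame (column names follow the Titanic data; the values are made up, and which option to choose per column depends on the correlation analysis below):

```python
import pandas as pd

# Illustrative frame with the three columns that have missing values
df = pd.DataFrame({
    "Age": [22.0, None, 26.0],
    "Cabin": ["C123", "C85", None],
    "Embarked": ["S", "C", "S"],
})

# Option 1: remove the feature entirely (plausible for Cabin, 77.1% missing)
dropped_col = df.drop(columns=["Cabin"])

# Option 2: remove the rows containing any missing value
dropped_rows = df.dropna()

# Option 3: fill in missing values (median for numeric, mode for categorical)
filled = df.copy()
filled["Age"] = filled["Age"].fillna(filled["Age"].median())
```

Option 2 is usually too aggressive here: with Cabin missing in 77.1% of rows, dropping every row with any missing value would discard most of the training data.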
[4]:
for feature in ['Age', 'Cabin', 'Embarked']:
    plot(train_df, feature, 'Survived').show()