
pandas-profiling


Documentation | Slack | Stack Overflow | Latest changelog

pandas-profiling generates profile reports from a pandas DataFrame. The pandas df.describe() function is handy yet a little basic for exploratory data analysis. pandas-profiling extends pandas DataFrame with df.profile_report(), which automatically generates a standardized univariate and multivariate report for data understanding.
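The snippet below is a minimal sketch of the DataFrame accessor mentioned above; the example data and output file name are made up purely for illustration:

import pandas as pd
import pandas_profiling  # importing the package registers the df.profile_report() accessor

# Hypothetical example data, purely for illustration
df = pd.DataFrame({"x": [1, 2, 2, 3], "y": ["a", "b", "b", None]})

# Keyword arguments are passed through to ProfileReport
profile = df.profile_report(title="Accessor example")
profile.to_file("accessor_example.html")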

For each column, the following information (whenever relevant for the column type) is presented in an interactive HTML report:

  • Type inference: detect the types of columns in a DataFrame
  • Essentials: type, unique values, indication of missing values
  • Quantile statistics: minimum value, Q1, median, Q3, maximum, range, interquartile range
  • Descriptive statistics: mean, mode, standard deviation, sum, median absolute deviation, coefficient of variation, kurtosis, skewness
  • Most frequent and extreme values
  • Histograms: categorical and numerical
  • Correlations: high correlation warnings, based on different correlation metrics (Spearman, Pearson, Kendall, Cramér’s V, Phik)
  • Missing values: through counts, matrix, heatmap and dendrograms
  • Duplicate rows: list of the most common duplicated rows
  • Text analysis: most common categories (uppercase, lowercase, separator), scripts (Latin, Cyrillic) and blocks (ASCII, Cyrillic)
  • File and Image analysis: file sizes, creation dates, dimensions, indication of truncated images and existence of EXIF metadata

The report contains three additional sections:

  • Overview: mostly global details about the dataset (number of records, number of variables, overall missingness and duplicates, memory footprint)
  • Alerts: a comprehensive and automatic list of potential data quality issues (high correlation, skewness, uniformity, zeros, missing values, constant values, among others)
  • Reproduction: technical details about the analysis (time, version and configuration)

Looking for a Spark backend to profile large datasets? It's a work in progress.

Interested in uncovering temporal patterns? Check out popmon.

Quickstart

Start by loading your pandas DataFrame as you normally would, e.g. by using:

import numpy as np
import pandas as pd
from pandas_profiling import ProfileReport

df = pd.DataFrame(np.random.rand(100, 5), columns=["a", "b", "c", "d", "e"])

To generate the standard profiling report, merely run:

profile = ProfileReport(df, title="Pandas Profiling Report")

Using inside Jupyter Notebooks

There are two interfaces to consume the report inside a Jupyter notebook: through widgets and through an embedded HTML report.

The first option renders the report as a set of interactive widgets. In a Jupyter notebook, run:

profile.to_widgets()

The HTML report can be directly embedded in a cell in a similar fashion:

profile.to_notebook_iframe()

Exporting the report to a file

To generate an HTML report file, assign the ProfileReport to a variable and use the to_file() function:

profile.to_file("your_report.html")

Alternatively, the report's data can be obtained as a JSON file:

# As a JSON string
json_data = profile.to_json()

# As a file
profile.to_file("your_report.json")

Using in the command line

For standard formatted CSV files (which can be read directly by pandas without additional settings), the pandas_profiling executable can be used from the command line. The example below processes a data.csv dataset and writes a report titled Example Profiling Report to the file report.html, using a configuration file called default.yaml.

pandas_profiling --title "Example Profiling Report" --config_file default.yaml data.csv report.html

Additional details on the CLI are available in the documentation.

Examples

The following example reports showcase the potential of the package across a wide range of datasets and data types:

Installation

Additional details, including information about widget support, are available in the documentation.

Using pip

You can install using the pip package manager by running:

pip install -U pandas-profiling[notebook]

Using conda

You can install using the conda package manager by running:

conda install -c conda-forge pandas-profiling

From source (development)

Download the source code by cloning the repository, or click Download ZIP to get the latest stable version.

Install it by navigating to the proper directory and running:

python setup.py install

The profiling report is written in HTML and CSS, which means a modern browser is required.

You need Python 3 to run the package. Other dependencies can be found in the requirements files:

  • requirements.txt: package requirements
  • requirements-dev.txt: requirements for development
  • requirements-test.txt: requirements for testing
  • setup.py: requirements for widgets etc.

Use cases

The documentation includes guides, tips and tricks for tackling common use cases:

  • Profiling large datasets: tips on how to prepare data and configure pandas-profiling for working with large datasets (see the sketch after this list)
  • Handling sensitive data: generating reports which are mindful about sensitive data in the input dataset
  • Dataset metadata and data dictionaries: complementing the report with dataset details and column-specific data dictionaries
  • Customizing the report's appearance: changing the appearance of the report's page and of the contained visualizations
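For the first use case, the sketch below shows minimal mode, one documented option for large inputs: it disables the most expensive computations in exchange for a leaner report. The example data is generated here purely for illustration:

import numpy as np
import pandas as pd
from pandas_profiling import ProfileReport

# Hypothetical large dataset, generated only for illustration
big_df = pd.DataFrame(np.random.rand(1_000_000, 10), columns=list("abcdefghij"))

# minimal=True turns off the most expensive parts of the analysis
# (e.g. correlations and duplicate detection), trading depth for speed
profile = ProfileReport(big_df, title="Large dataset, minimal mode", minimal=True)
profile.to_file("large_report.html")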

Integrations

To maximize its usefulness in real-world contexts, pandas-profiling has a set of implicit and explicit integrations with a variety of other tools in the Data Science ecosystem:

  • Other DataFrame libraries: how to compute the profiling of data stored in libraries other than pandas (see the sketch after this list)
  • Great Expectations: generating Great Expectations expectation suites directly from a profiling report
  • Interactive applications: embedding profiling reports in Streamlit, Dash or Panel applications
  • Pipelines: integration with DAG workflow execution tools like Airflow or Kedro
  • Cloud services: using pandas-profiling in hosted computation services like Lambda, Google Cloud or Kaggle
  • IDEs: using pandas-profiling directly from integrated development environments such as PyCharm
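As a concrete illustration of the first integration, data held in another DataFrame library can be converted to pandas before profiling. The sketch below uses Dask as an example; the input file name is an assumption, and any library with a to-pandas conversion follows the same pattern:

import dask.dataframe as dd
from pandas_profiling import ProfileReport

# Hypothetical input file; any Dask-readable source works the same way
ddf = dd.read_csv("data.csv")

# Materialize the Dask DataFrame as an in-memory pandas DataFrame
# (for very large data, consider filtering or sampling before this step)
pdf = ddf.compute()

profile = ProfileReport(pdf, title="Dask-backed data")
profile.to_file("dask_report.html")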

Support

Need help? Want to share a perspective? Report a bug? Ideas for collaborations? Reach out via the following channels:

  • Stack Overflow: ideal for asking questions on how to use the package
  • GitHub Issues: bugs, proposals for changes, feature requests
  • Slack: general chat, questions, collaborations
  • Email: project collaborations or sponsoring

Before reporting an issue on GitHub, check out Common Issues.

🤝🏽 Contributing

Learn how to get involved in the Contribution Guide.

A low-threshold place to ask questions or start contributing is the Data Centric AI Community's Slack.

