MLP vs. Gradient Boosting

Evaluating Neural Networks vs. Gradient Boosting on Regression and Classification Tasks

This project evaluates different Multi-Layer Perceptron (MLP) architectures and compares them with Gradient Boosting models for regression and classification tasks.

Project Breakdown

The project workflow repeats the same steps for each task:

Regression
1. Data preprocessing
2. Building and evaluating MLP models
3. Comparing with a Gradient Boosting Regressor

Classification
1. Data preprocessing
2. Building and evaluating MLP models
3. Comparing with a Gradient Boosting Classifier

Through a series of experiments, we optimize model structures, test different MLP depths, and compare their performance against Gradient Boosting to determine the most effective approach.
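A minimal sketch of such an experiment, using scikit-learn on synthetic data (the project's actual dataset, architectures, and hyperparameters are not specified here, so everything below is illustrative):

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error, r2_score

# Synthetic stand-in for the project's tabular data
X, y = make_regression(n_samples=300, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Feature scaling matters for MLPs, but not for tree ensembles
scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

# Two illustrative MLP depths vs. a default Gradient Boosting Regressor
models = {
    "MLP (1 hidden layer)": MLPRegressor(hidden_layer_sizes=(64,),
                                         max_iter=1000, random_state=0),
    "MLP (3 hidden layers)": MLPRegressor(hidden_layer_sizes=(64, 32, 16),
                                          max_iter=1000, random_state=0),
    "Gradient Boosting": GradientBoostingRegressor(random_state=0),
}

results = {}
for name, model in models.items():
    # MLPs get the scaled features; the tree ensemble uses the raw ones
    Xtr, Xte = (X_train_s, X_test_s) if "MLP" in name else (X_train, X_test)
    model.fit(Xtr, y_train)
    pred = model.predict(Xte)
    results[name] = (mean_squared_error(y_test, pred), r2_score(y_test, pred))
    print(f"{name}: MSE={results[name][0]:.1f}, R2={results[name][1]:.3f}")
```

Varying `hidden_layer_sizes` is one simple way to test different MLP depths; the winning configuration then becomes the baseline for the Gradient Boosting comparison.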

Primary Goal

Evaluate MLP architectures for both regression and classification.


Secondary Goal

Compare the best-performing neural network architecture with Gradient Boosting.


Optimize Performance Metrics

Use Mean Squared Error (MSE) and R² score for Regression.

Use Accuracy and Mean Squared Error (MSE) for Classification.
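These metrics can be computed with scikit-learn; a toy example with hand-checkable numbers:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score, accuracy_score

# Regression metrics on toy predictions
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])
mse = mean_squared_error(y_true, y_pred)  # mean of squared errors
r2 = r2_score(y_true, y_pred)             # 1 - SS_res / SS_tot

# Classification accuracy on toy labels: 3 of 4 correct
labels_true = np.array([0, 1, 1, 0])
labels_pred = np.array([0, 1, 0, 0])
acc = accuracy_score(labels_true, labels_pred)

print(f"MSE={mse:.3f}, R2={r2:.3f}, accuracy={acc:.2f}")
```

Lower MSE is better; R² approaches 1 for a good fit and accuracy is the fraction of correctly predicted labels.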


Key Findings

Gradient Boosting performed better in both the regression and classification tasks. Multi-Layer Perceptrons (MLPs) can be effective, but structured tabular data often favors gradient boosting methods.

Performance

Gradient Boosting Regressor outperformed MLP for regression.


Efficiency

Both the MLP and Gradient Boosting performed well in classification, but the Gradient Boosting Classifier (GBC) was slightly better.
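The classification comparison can be sketched the same way as the regression one; again on synthetic data, with illustrative (not the project's actual) hyperparameters:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

# Synthetic binary classification problem as a stand-in
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# MLP: scale features first, then fit
scaler = StandardScaler().fit(X_train)
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
mlp.fit(scaler.transform(X_train), y_train)
mlp_acc = accuracy_score(y_test, mlp.predict(scaler.transform(X_test)))

# GBC: works directly on the raw features
gbc = GradientBoostingClassifier(random_state=0)
gbc.fit(X_train, y_train)
gbc_acc = accuracy_score(y_test, gbc.predict(X_test))

print(f"MLP accuracy={mlp_acc:.3f}, GBC accuracy={gbc_acc:.3f}")
```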


Adaptation

Tuning deep networks is challenging, whereas Gradient Boosting adapts more efficiently.
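Part of that efficiency is that boosting exposes a small set of impactful knobs, so an exhaustive search stays cheap; a sketch with a hypothetical grid (the values are illustrative, not the project's):

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=300, n_features=8, noise=5.0, random_state=0)

# A small grid over the few parameters that matter most for boosting
grid = GridSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_grid={
        "n_estimators": [100, 200],
        "learning_rate": [0.05, 0.1],
        "max_depth": [2, 3],
    },
    cv=3,
    scoring="neg_mean_squared_error",
)
grid.fit(X, y)
print(grid.best_params_)
```

An equivalent search for an MLP would also have to cover depth, layer widths, activation, learning-rate schedule, and regularization, which is why tuning deep networks tends to be costlier.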

© 2025 | All rights reserved