
Updates to xgboost tutorial #10

Merged · 2 commits · Aug 15, 2024
@@ -115,7 +115,7 @@
 }
 },
 "source": [
-"## 4. Data Preprocessing\n",
+"### 4. Data Preprocessing\n",
 "#### 4.1. Overview of the USGS Stream Station\n",
 "- The dataset that we will use provides data for seven GSL watershed stations. \n",
 "- The dataset contains climate variables, such as precipitation and temperature; water infrastructure, such as storage percentage; and watershed characteristics, such as average area and elevation. \n",
@@ -486,7 +486,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## 5. Model Development \n",
+"### 5. Model Development \n",
 "#### 5.1. Defining the XGBoost Model \n",
 "As mentioned, we will use XGBoost in this tutorial, via the [dmlc XGBoost package](https://xgboost.readthedocs.io/en/stable/). Understanding and tuning the model parameters is critical in any ML model development, since they affect the final model performance. XGBoost has many parameters; here we will work with the three most important ones:\n",
 " \n",
@@ -609,7 +609,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"#### !!!! Don't forget to train and save your model after tuning the hyperparameters as a Pickle file.\n"
+"***!!!! Don't forget to train your model after tuning the hyperparameters and save it as a pickle file.***\n"
 ]
 },
 {
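The highlighted reminder concerns persisting the tuned model; a minimal sketch of the train-and-pickle step, using synthetic stand-in data and an illustrative file name:

```python
import pickle

import numpy as np
import xgboost as xgb

# Tiny synthetic stand-in for the tutorial's training data (illustrative only).
rng = np.random.default_rng(42)
X_train = rng.random((100, 5))
y_train = rng.random(100)

model = xgb.XGBRegressor(n_estimators=100, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)

# Save the trained model as a pickle file, as the note above reminds us.
with open("xgb_model.pkl", "wb") as f:
    pickle.dump(model, f)
```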
@@ -72,6 +72,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
+"### 5. Model Development Continued\n",
 "#### 5.2. Scaling the Data\n",
 "Generally, scaling the inputs is not required for decision-tree ensemble models. However, some studies suggest scaling the inputs, since XGBoost uses the Gradient Descent algorithm in its core optimization. So here we will try both \n",
 "scaled and unscaled inputs to see the difference.\n",
@@ -98,6 +98,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
+"### 5. Model Development Continued\n",
 "#### 5.5. Testing the Model\n",
 "We will give the model the test set for each station and compare its predictions with the observations, evaluating the model on data it has not seen before. Before feeding in the test data, we load the saved model."
 ]
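A minimal sketch of the load-then-test flow described above, assuming the model was saved as a pickle file earlier; the file name, test arrays, and metric are illustrative:

```python
import pickle

import numpy as np
from sklearn.metrics import mean_squared_error

# Reload the model saved after hyperparameter tuning.
with open("xgb_model.pkl", "rb") as f:
    model = pickle.load(f)

# Hypothetical test data for one station; the tutorial's actual variables differ.
rng = np.random.default_rng(1)
X_test = rng.random((20, 5))
y_test = rng.random(20)

# Compare predictions with observations on unseen data.
y_pred = model.predict(X_test)
rmse = float(np.sqrt(mean_squared_error(y_test, y_pred)))
print(f"Test RMSE: {rmse:.3f}")
```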
book/tutorials/index.md (2 changes: 1 addition & 1 deletion)
@@ -6,4 +6,4 @@ Below you'll find a table keeping track of all tutorials presented at this event

 | Tutorial | Topics | Datasets | Recording Link |
 | - | - | - | - |
-| [Machine Learning for Post-Processing NWM Data](./decision_trees/01.script/01.tutorial_post_processing_xgboost_tuning.ipynb) | Decision trees and XGBoost | n/a | Not recorded |
+| [Machine Learning for Post-Processing NWM Data](./decision_trees/01.script/00.tutorial_post_processing_xgboost_intro.md) | Decision trees and XGBoost | n/a | Not recorded |