|
1361 | 1361 | "\n",
|
1362 | 1362 | "With practice and running many different experiments, you'll start to build an intuition of what *might* help your model.\n",
|
1363 | 1363 | "\n",
|
1364 |      | - "I say *might* on purpose because there's no guarantees.\n",
     | 1364 | + "I say *might* on purpose because there's no guarantee.\n",
1365 | 1365 | "\n",
|
1366 | 1366 | "But generally, in light of [*The Bitter Lesson*](http://www.incompleteideas.net/IncIdeas/BitterLesson.html) (I've mentioned this twice now because it's an important essay in the world of AI), generally the bigger your model (more learnable parameters) and the more data you have (more opportunities to learn), the better the performance.\n",
|
1367 | 1367 | "\n",
|
|
1692 | 1692 | "\n",
|
1693 | 1693 | "# Create an EffNetB0 feature extractor\n",
|
1694 | 1694 | "def create_effnetb0():\n",
|
1695 |      | - " # 1. Get the base mdoel with pretrained weights and send to target device\n",
     | 1695 | + " # 1. Get the base model with pretrained weights and send to target device\n",
1696 | 1696 | " weights = torchvision.models.EfficientNet_B0_Weights.DEFAULT\n",
|
1697 | 1697 | " model = torchvision.models.efficientnet_b0(weights=weights).to(device)\n",
|
1698 | 1698 | "\n",
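For context, the part of this helper not shown in the hunk typically follows the standard feature-extractor recipe: freeze the pretrained base and replace the classifier head. A minimal sketch of that pattern (the function name, `device`, and `class_names` here are assumptions for illustration, not lines confirmed by this diff):

```python
import torch
import torchvision
from torch import nn

def create_effnetb0_sketch(device: torch.device, class_names: list) -> nn.Module:
    # Sketch only: mirrors the usual feature-extractor pattern, not the notebook's exact code
    # 1. Get the base model with pretrained weights and send it to the target device
    weights = torchvision.models.EfficientNet_B0_Weights.DEFAULT
    model = torchvision.models.efficientnet_b0(weights=weights).to(device)

    # 2. Freeze the base layers so only the new head gets trained
    for param in model.features.parameters():
        param.requires_grad = False

    # 3. Swap the classifier head to match the number of target classes
    #    (1280 is EfficientNet-B0's feature dimension; 0.2 dropout is torchvision's default)
    model.classifier = nn.Sequential(
        nn.Dropout(p=0.2, inplace=True),
        nn.Linear(in_features=1280, out_features=len(class_names)),
    ).to(device)

    return model
```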
|
|
2417 | 2417 | "cell_type": "markdown",
|
2418 | 2418 | "metadata": {},
|
2419 | 2419 | "source": [
|
2420 |      | - "Looks like our best model so far is 29 MB in size. We'll keep this in mind if we wanted to deploy it later on.\n",
     | 2420 | + "Looks like our best model so far is 29 MB in size. We'll keep this in mind if we want to deploy it later on.\n",
2421 | 2421 | "\n",
|
2422 | 2422 | "Time to make and visualize some predictions.\n",
|
2423 | 2423 | "\n",
|
|
2595 | 2595 | "\n",
|
2596 | 2596 | "The main ideas you should take away from this Milestone Project 1 are:\n",
|
2597 | 2597 | "\n",
|
2598 |      | - "* The machine learning practioner's motto: *experiment, experiment, experiment!* (though we've been doing plenty of this already).\n",
     | 2598 | + "* The machine learning practitioner's motto: *experiment, experiment, experiment!* (though we've been doing plenty of this already).\n",
2599 | 2599 | "* In the beginning, keep your experiments small so you can work fast; your first few experiments shouldn't take more than a few seconds to a few minutes to run.\n",
|
2600 | 2600 | "* The more experiments you do, the quicker you can figure out what *doesn't* work.\n",
|
2601 | 2601 | "* Scale up when you find something that works. For example, since we've found a pretty good performing model with EffNetB2 as a feature extractor, perhaps you'd now like to see what happens when you scale it up to the whole [Food101 dataset](https://pytorch.org/vision/main/generated/torchvision.datasets.Food101.html) from `torchvision.datasets`.\n",
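If you do try that scale-up, Food101 can be pulled straight from `torchvision.datasets`. A quick sketch, assuming a placeholder transform and data directory rather than anything from this notebook:

```python
import torchvision
from torchvision import transforms

# Placeholder transform; in practice reuse the transforms paired with your pretrained weights
food101_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Downloads the full dataset (101 classes, 75,750 train / 25,250 test images, several GB)
train_data = torchvision.datasets.Food101(root="data",
                                          split="train",
                                          transform=food101_transform,
                                          download=True)
test_data = torchvision.datasets.Food101(root="data",
                                         split="test",
                                         transform=food101_transform,
                                         download=True)
```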
|
|
2666 | 2666 | "NUM_WORKERS = os.cpu_count() # use maximum number of CPUs for workers to load data \n",
|
2667 | 2667 | "\n",
|
2668 | 2668 | "# Note: this is an updated version of data_setup.create_dataloaders to handle\n",
|
2669 |      | - "# differnt train and test transforms.\n",
     | 2669 | + "# different train and test transforms.\n",
2670 | 2670 | "def create_dataloaders(\n",
|
2671 | 2671 | " train_dir, \n",
|
2672 | 2672 | " test_dir, \n",
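The rest of the signature is cut off in this hunk, but the point of the update is to accept separate train and test transforms instead of a single `transform` argument. A rough sketch of how such a function might look (parameter names beyond `train_dir`/`test_dir` are assumptions, not the notebook's actual code):

```python
import os
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

NUM_WORKERS = os.cpu_count()

def create_dataloaders(train_dir: str,
                       test_dir: str,
                       train_transform: transforms.Compose,  # assumed parameter name
                       test_transform: transforms.Compose,   # assumed parameter name
                       batch_size: int,
                       num_workers: int = NUM_WORKERS):
    # Build the datasets with *different* transforms for train and test
    train_data = datasets.ImageFolder(train_dir, transform=train_transform)
    test_data = datasets.ImageFolder(test_dir, transform=test_transform)

    class_names = train_data.classes

    # Wrap them in DataLoaders (shuffle the training set only)
    train_dataloader = DataLoader(train_data,
                                  batch_size=batch_size,
                                  shuffle=True,
                                  num_workers=num_workers,
                                  pin_memory=True)
    test_dataloader = DataLoader(test_data,
                                 batch_size=batch_size,
                                 shuffle=False,
                                 num_workers=num_workers,
                                 pin_memory=True)

    return train_dataloader, test_dataloader, class_names
```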
|
|