RC 1.0.4 (#884)
* doc and version update

* stop roxygen from making $[0, 1]$ to a link

topepo authored Feb 23, 2023
1 parent a482442 commit dda22fe
Showing 15 changed files with 61 additions and 57 deletions.
12 changes: 6 additions & 6 deletions DESCRIPTION
@@ -1,13 +1,13 @@
Package: parsnip
Title: A Common API to Modeling and Analysis Functions
-Version: 1.0.3.9003
+Version: 1.0.4
Authors@R: c(
-    person("Max", "Kuhn", , "max@rstudio.com", role = c("aut", "cre")),
-    person("Davis", "Vaughan", , "davis@rstudio.com", role = "aut"),
+    person("Max", "Kuhn", , "max@posit.co", role = c("aut", "cre")),
+    person("Davis", "Vaughan", , "davis@posit.co", role = "aut"),
     person("Emil", "Hvitfeldt", , "[email protected]", role = "ctb"),
-    person("RStudio", role = c("cph", "fnd"))
+    person("Posit Software PBC", role = c("cph", "fnd"))
)
-Maintainer: Max Kuhn <max@rstudio.com>
+Maintainer: Max Kuhn <max@posit.co>
Description: A common interface is provided to allow users to specify a
model without having to remember the different argument names across
different functions or computational engines (e.g. 'R', 'Spark',
@@ -76,4 +76,4 @@ Config/testthat/edition: 3
Encoding: UTF-8
LazyData: true
Roxygen: list(markdown = TRUE)
-RoxygenNote: 7.2.3
+RoxygenNote: 7.2.3.9000
2 changes: 1 addition & 1 deletion NEWS.md
@@ -1,4 +1,4 @@
-# parsnip (development version)
+# parsnip 1.0.4

* For censored regression models, a "reverse Kaplan-Meier" curve is computed for the censoring distribution. This can be used when evaluating this type of model (#855).

50 changes: 25 additions & 25 deletions README.md
@@ -77,22 +77,22 @@ between implementations.

In this example:

-- the **type** of model is “random forest”,
-- the **mode** of the model is “regression” (as opposed to
-  classification, etc), and
-- the computational **engine** is the name of the R package.
+- the **type** of model is “random forest”,
+- the **mode** of the model is “regression” (as opposed to
+  classification, etc), and
+- the computational **engine** is the name of the R package.

The goals of parsnip are to:

-- Separate the definition of a model from its evaluation.
-- Decouple the model specification from the implementation (whether
-  the implementation is in R, spark, or something else). For example,
-  the user would call `rand_forest` instead of `ranger::ranger` or
-  other specific packages.
-- Harmonize argument names (e.g. `n.trees`, `ntrees`, `trees`) so that
-  users only need to remember a single name. This will help *across*
-  model types too so that `trees` will be the same argument across
-  random forest as well as boosting or bagging.
+- Separate the definition of a model from its evaluation.
+- Decouple the model specification from the implementation (whether the
+  implementation is in R, spark, or something else). For example, the
+  user would call `rand_forest` instead of `ranger::ranger` or other
+  specific packages.
+- Harmonize argument names (e.g. `n.trees`, `ntrees`, `trees`) so that
+  users only need to remember a single name. This will help *across*
+  model types too so that `trees` will be the same argument across
+  random forest as well as boosting or bagging.

Using the example above, the parsnip approach would be:
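The example itself is collapsed in this view. As an illustrative sketch only, assuming parsnip's documented `rand_forest()`, `set_mode()`, and `set_engine()` syntax (the argument values here are hypothetical, not taken from the README):

```r
library(parsnip)

# One model specification, three separate declarations:
# type (rand_forest), mode (regression), and engine (ranger).
# mtry and trees are hypothetical values for illustration.
rand_forest(mtry = 10, trees = 2000) %>%
  set_mode("regression") %>%
  set_engine("ranger")
```

Swapping `"ranger"` for another supported engine leaves the rest of the specification unchanged, which is the point of the goals listed above.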

@@ -166,18 +166,18 @@ This project is released with a [Contributor Code of
Conduct](https://contributor-covenant.org/version/2/0/CODE_OF_CONDUCT.html).
By contributing to this project, you agree to abide by its terms.

-- For questions and discussions about tidymodels packages, modeling,
-  and machine learning, please [post on RStudio
-  Community](https://community.rstudio.com/new-topic?category_id=15&tags=tidymodels,question).
+- For questions and discussions about tidymodels packages, modeling, and
+  machine learning, please [post on RStudio
+  Community](https://community.rstudio.com/new-topic?category_id=15&tags=tidymodels,question).

-- If you think you have encountered a bug, please [submit an
-  issue](https://github.com/tidymodels/parsnip/issues).
+- If you think you have encountered a bug, please [submit an
+  issue](https://github.com/tidymodels/parsnip/issues).

-- Either way, learn how to create and share a
-  [reprex](https://reprex.tidyverse.org/articles/articles/learn-reprex.html)
-  (a minimal, reproducible example), to clearly communicate about your
-  code.
+- Either way, learn how to create and share a
+  [reprex](https://reprex.tidyverse.org/articles/articles/learn-reprex.html)
+  (a minimal, reproducible example), to clearly communicate about your
+  code.

-- Check out further details on [contributing guidelines for tidymodels
-  packages](https://www.tidymodels.org/contribute/) and [how to get
-  help](https://www.tidymodels.org/help/).
+- Check out further details on [contributing guidelines for tidymodels
+  packages](https://www.tidymodels.org/contribute/) and [how to get
+  help](https://www.tidymodels.org/help/).
4 changes: 2 additions & 2 deletions man/details_boost_tree_h2o.Rd

Some generated files are not rendered by default.

4 changes: 2 additions & 2 deletions man/details_boost_tree_lightgbm.Rd


8 changes: 4 additions & 4 deletions man/details_boost_tree_xgboost.Rd


4 changes: 2 additions & 2 deletions man/details_rule_fit_xrf.Rd


6 changes: 3 additions & 3 deletions man/parsnip-package.Rd


4 changes: 2 additions & 2 deletions man/rmd/boost_tree_h2o.md
@@ -118,11 +118,11 @@ Non-numeric predictors (i.e., factors) are internally converted to numeric. In t

The `mtry` argument denotes the number of predictors that will be randomly sampled at each split when creating tree models.

-Some engines, such as `"xgboost"`, `"xrf"`, and `"lightgbm"`, interpret their analogue to the `mtry` argument as the _proportion_ of predictors that will be randomly sampled at each split rather than the _count_. In some settings, such as when tuning over preprocessors that influence the number of predictors, this parameterization is quite helpful---interpreting `mtry` as a proportion means that $[0, 1]$ is always a valid range for that parameter, regardless of input data.
+Some engines, such as `"xgboost"`, `"xrf"`, and `"lightgbm"`, interpret their analogue to the `mtry` argument as the _proportion_ of predictors that will be randomly sampled at each split rather than the _count_. In some settings, such as when tuning over preprocessors that influence the number of predictors, this parameterization is quite helpful---interpreting `mtry` as a proportion means that `[0, 1]` is always a valid range for that parameter, regardless of input data.

parsnip and its extensions accommodate this parameterization using the `counts` argument: a logical indicating whether `mtry` should be interpreted as the number of predictors that will be randomly sampled at each split. `TRUE` indicates that `mtry` will be interpreted as a count; `FALSE` indicates that it will be interpreted as a proportion.

-`mtry` is a main model argument for \\code{\\link[=boost_tree]{boost_tree()}} and \\code{\\link[=rand_forest]{rand_forest()}}, and thus should not have an engine-specific interface. So, regardless of engine, `counts` defaults to `TRUE`. For engines that support the proportion interpretation (currently `"xgboost"` and `"xrf"`, via the rules package, and `"lightgbm"` via the bonsai package) the user can pass the `counts = FALSE` argument to `set_engine()` to supply `mtry` values within $[0, 1]$.
+`mtry` is a main model argument for \\code{\\link[=boost_tree]{boost_tree()}} and \\code{\\link[=rand_forest]{rand_forest()}}, and thus should not have an engine-specific interface. So, regardless of engine, `counts` defaults to `TRUE`. For engines that support the proportion interpretation (currently `"xgboost"` and `"xrf"`, via the rules package, and `"lightgbm"` via the bonsai package) the user can pass the `counts = FALSE` argument to `set_engine()` to supply `mtry` values within `[0, 1]`.

## Initializing h2o

4 changes: 2 additions & 2 deletions man/rmd/boost_tree_lightgbm.md
@@ -115,11 +115,11 @@ Non-numeric predictors (i.e., factors) are internally converted to numeric. In t

The `mtry` argument denotes the number of predictors that will be randomly sampled at each split when creating tree models.

-Some engines, such as `"xgboost"`, `"xrf"`, and `"lightgbm"`, interpret their analogue to the `mtry` argument as the _proportion_ of predictors that will be randomly sampled at each split rather than the _count_. In some settings, such as when tuning over preprocessors that influence the number of predictors, this parameterization is quite helpful---interpreting `mtry` as a proportion means that $[0, 1]$ is always a valid range for that parameter, regardless of input data.
+Some engines, such as `"xgboost"`, `"xrf"`, and `"lightgbm"`, interpret their analogue to the `mtry` argument as the _proportion_ of predictors that will be randomly sampled at each split rather than the _count_. In some settings, such as when tuning over preprocessors that influence the number of predictors, this parameterization is quite helpful---interpreting `mtry` as a proportion means that `[0, 1]` is always a valid range for that parameter, regardless of input data.

parsnip and its extensions accommodate this parameterization using the `counts` argument: a logical indicating whether `mtry` should be interpreted as the number of predictors that will be randomly sampled at each split. `TRUE` indicates that `mtry` will be interpreted as a count; `FALSE` indicates that it will be interpreted as a proportion.

-`mtry` is a main model argument for \\code{\\link[=boost_tree]{boost_tree()}} and \\code{\\link[=rand_forest]{rand_forest()}}, and thus should not have an engine-specific interface. So, regardless of engine, `counts` defaults to `TRUE`. For engines that support the proportion interpretation (currently `"xgboost"` and `"xrf"`, via the rules package, and `"lightgbm"` via the bonsai package) the user can pass the `counts = FALSE` argument to `set_engine()` to supply `mtry` values within $[0, 1]$.
+`mtry` is a main model argument for \\code{\\link[=boost_tree]{boost_tree()}} and \\code{\\link[=rand_forest]{rand_forest()}}, and thus should not have an engine-specific interface. So, regardless of engine, `counts` defaults to `TRUE`. For engines that support the proportion interpretation (currently `"xgboost"` and `"xrf"`, via the rules package, and `"lightgbm"` via the bonsai package) the user can pass the `counts = FALSE` argument to `set_engine()` to supply `mtry` values within `[0, 1]`.

### Saving fitted model objects

8 changes: 4 additions & 4 deletions man/rmd/boost_tree_xgboost.md
@@ -121,7 +121,7 @@ boost_tree() %>%
```

```
-## Boosted Tree Model Specification (unknown)
+## Boosted Tree Model Specification (unknown mode)
##
## Engine-Specific Arguments:
## eval_metric = mae
@@ -139,7 +139,7 @@ boost_tree() %>%
```

```
-## Boosted Tree Model Specification (unknown)
+## Boosted Tree Model Specification (unknown mode)
##
## Engine-Specific Arguments:
## params = list(eval_metric = "mae")
@@ -162,11 +162,11 @@ By default, the model is trained without parallel processing. This can be change

The `mtry` argument denotes the number of predictors that will be randomly sampled at each split when creating tree models.

-Some engines, such as `"xgboost"`, `"xrf"`, and `"lightgbm"`, interpret their analogue to the `mtry` argument as the _proportion_ of predictors that will be randomly sampled at each split rather than the _count_. In some settings, such as when tuning over preprocessors that influence the number of predictors, this parameterization is quite helpful---interpreting `mtry` as a proportion means that $[0, 1]$ is always a valid range for that parameter, regardless of input data.
+Some engines, such as `"xgboost"`, `"xrf"`, and `"lightgbm"`, interpret their analogue to the `mtry` argument as the _proportion_ of predictors that will be randomly sampled at each split rather than the _count_. In some settings, such as when tuning over preprocessors that influence the number of predictors, this parameterization is quite helpful---interpreting `mtry` as a proportion means that `[0, 1]` is always a valid range for that parameter, regardless of input data.

parsnip and its extensions accommodate this parameterization using the `counts` argument: a logical indicating whether `mtry` should be interpreted as the number of predictors that will be randomly sampled at each split. `TRUE` indicates that `mtry` will be interpreted as a count; `FALSE` indicates that it will be interpreted as a proportion.

-`mtry` is a main model argument for \\code{\\link[=boost_tree]{boost_tree()}} and \\code{\\link[=rand_forest]{rand_forest()}}, and thus should not have an engine-specific interface. So, regardless of engine, `counts` defaults to `TRUE`. For engines that support the proportion interpretation (currently `"xgboost"` and `"xrf"`, via the rules package, and `"lightgbm"` via the bonsai package) the user can pass the `counts = FALSE` argument to `set_engine()` to supply `mtry` values within $[0, 1]$.
+`mtry` is a main model argument for \\code{\\link[=boost_tree]{boost_tree()}} and \\code{\\link[=rand_forest]{rand_forest()}}, and thus should not have an engine-specific interface. So, regardless of engine, `counts` defaults to `TRUE`. For engines that support the proportion interpretation (currently `"xgboost"` and `"xrf"`, via the rules package, and `"lightgbm"` via the bonsai package) the user can pass the `counts = FALSE` argument to `set_engine()` to supply `mtry` values within `[0, 1]`.
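A minimal sketch of the `counts = FALSE` usage described above, using the `set_engine()` call named in the text (the `mtry` and `trees` values are hypothetical):

```r
library(parsnip)

# With counts = FALSE, the xgboost engine reads mtry as a proportion:
# mtry = 0.5 asks for half of the predictors to be randomly sampled at
# each split, however many columns the preprocessed data ends up with.
boost_tree(mtry = 0.5, trees = 500) %>%
  set_mode("regression") %>%
  set_engine("xgboost", counts = FALSE)
```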

### Early stopping

2 changes: 2 additions & 0 deletions man/rmd/discrim_regularized_klaR.md
@@ -6,6 +6,8 @@ For this engine, there is a single mode: classification
## Tuning Parameters




This model has 2 tuning parameters:

- `frac_common_cov`: Fraction of the Common Covariance Matrix (type: double, default: (see below))
2 changes: 2 additions & 0 deletions man/rmd/mlp_h2o.md
@@ -5,6 +5,8 @@ For this engine, there are multiple modes: classification and regression

## Tuning Parameters



This model has 6 tuning parameters:

- `hidden_units`: # Hidden Units (type: integer, default: 200L)