First, thank you for contributing to Vector! The goal of this document is to provide everything you need to start contributing to Vector. The following TOC is sorted progressively, starting with the basics and expanding into more specifics.
- Introduction
- Your First Contribution
- Change Control
- Development
- Humans
- Security
- Legal
- FAQ
- Contact
- You're familiar with GitHub and the pull request workflow.
- You've read Vector's docs.
- You know about the Vector community. Please use it for help.
- Ensure your change has an issue! Find an existing issue or open a new issue. This is where you can get a feel for whether the change will be accepted or not. Changes that are questionable will have a `needs: approval` label.
- Once approved, fork the Vector repository into your own GitHub account.
- Create a new Git branch.
- Review the Vector change control and development workflows.
- Make your changes.
- Submit the branch as a pull request to the main Vector repo. A Vector team member should comment and/or review your pull request within a few days, although it may take longer depending on the circumstances.
If you're contributing a new source, sink, or transform to Vector, thank you, that's way cool! There are a few steps you need to think about if you want to make sure we can merge your contribution. We're here to help you along with these steps, but they are a blocker to getting a new integration released.
To merge a new source, sink, or transform, you need to:
- Add tests, especially integration tests if your contribution connects to an external service.
- Add instrumentation so folks using your integration can get insight into how it's working and performing. You can see some examples of instrumentation in existing integrations.
- Add documentation. You can see examples in the `docs` directory.
- Update `.github/CODEOWNERS` or talk to us about identifying someone on the team to help look after the new integration.
All changes must be made in a branch and submitted as pull requests. Vector does not adopt any type of branch naming style, but please use something descriptive of your changes.
Please ensure your commits are small and focused; they should tell a story of your change. This helps reviewers to follow your changes, especially for more complex changes.
Your commits must include a DCO signature. This is simpler than it sounds; it just means that all of your commits must contain:
Signed-off-by: Joe Smith <[email protected]>
Git makes this easy by adding the `-s` or `--signoff` flags when you commit:

git commit -sm 'My commit message'

We also included a `make signoff` target that handles this for you if you forget.
Once your changes are ready you must submit your branch as a pull
request.
The pull request title must follow the format outlined in the conventional commits spec. Conventional commits is a standardized format for commit messages. Vector only requires this format for commits on the `master` branch. Because Vector squashes commits before merging branches, only the pull request title must conform to this format. Vector performs a pull request check to verify the pull request title in case you forget.
A list of allowed sub-categories is defined here.
The following are all good examples of pull request titles:
feat(new sink): new `xyz` sink
feat(tcp source): add foo bar baz feature
fix(tcp source): fix foo bar baz bug
chore: improve build process
docs: fix typos
All pull requests should be reviewed by:
- No review required for cosmetic changes like whitespace, typos, and spelling, so long as they are made by a maintainer
- One Vector team member for minor changes or trivial changes from contributors
- Two Vector team members for major changes
- Three Vector team members for RFCs
If there are any CODEOWNERS automatically assigned, you should also wait for their review.
All pull requests are squashed and merged. We generally discourage large pull requests that are over 300-500 lines of diff. If you would like to propose a change that is larger, we suggest coming onto our Discord server and discussing it with one of our engineers. This way we can talk through the solution and discuss whether a change that large is even needed! This will produce a quicker response to the change and will likely produce code that aligns better with our process.
Currently, Vector uses GitHub Actions to run tests. The workflows are defined in `.github/workflows`.

GitHub Actions is responsible for releasing updated versions of Vector through various channels.
Tests are run for all changes except those that have the `ci-condition: skip` label.
Some long-running tests are only run daily, rather than on every pull request. If needed, an administrator can kick off these tests manually via the button on the nightly build action page.
Historically, we've had some trouble with tests being flaky. If your PR does not have passing tests:
- Ensure that the test failures are unrelated to your change
- Is it failing on master?
- Does it fail if you rerun CI?
- Can you reproduce locally?
- Find or open an issue for the test failure (example)
- Link the PR in the issue for the failing test so that there are more examples
You can invoke the test harness by commenting on any pull request with:
/test -t <name>
We're super excited to have you interested in working on Vector! Before you start you should pick how you want to develop.
For small or first-time contributions, we recommend the Docker method. Prefer to do it yourself? That's fine too!
Targets: You can use this method to produce aarch64, ARMv6/7, and x86_64 Linux builds.
Since not everyone has a full working native environment, we took our environment and stuffed it into a Docker (or Podman) container!
This is ideal for users who want it to "Just work" and just want to start contributing. It's also what we use for our CI, so you know if it breaks we can't do anything else until we fix it. 😉
Before you go further, install Docker or Podman through your official package manager, or from the Docker or Podman sites.
# Optional: Only if you use `podman`
export CONTAINER_TOOL="podman"
By default, `make environment`-style tasks will do a `docker pull` from GitHub's container repository. You can optionally build your own environment while you make your morning coffee ☕:
# Optional: Only if you want to go make a coffee
make environment-prepare
Now that you have your coffee, you can enter the shell!
# Enter a shell with optimized mounts for interactive processes.
# Inside here, you can use Vector like you have the full toolchain (see below!)
make environment
# Try out a specific container tool. (Docker/Podman)
make environment CONTAINER_TOOL="podman"
# Add extra cli opts
make environment CLI_OPTS="--publish 3000:2000"
Now you can use the jobs detailed in "Bring your own toolbox" below.
Want to run from outside of the environment? Clever. Good thinking. You can run any of the following:
# Validate your code can compile
make check ENVIRONMENT=true
# Validate your code actually does compile (in dev mode)
make build-dev ENVIRONMENT=true
# Validate your tests pass
make test SCOPE="sources::example" ENVIRONMENT=true
# Validate tests (that do not require other services) pass
make test ENVIRONMENT=true
# Validate your tests pass (starting required services in Docker)
make test-integration SCOPE="sources::example" ENVIRONMENT=true
# Validate your tests pass against a live service.
make test-integration SCOPE="sources::example" AUTOSPAWN=false ENVIRONMENT=true
# Validate all tests pass (starting required services in Docker)
make test-integration ENVIRONMENT=true
# Run your benchmarks
make bench SCOPE="transforms::example" ENVIRONMENT=true
# Format your code before pushing!
make fmt ENVIRONMENT=true
We use explicit environment opt-in as many contributors choose to keep their Rust toolchain local.
Targets: This option is required for MSVC/Mac/FreeBSD toolchains. It can be used to build for any environment or OS.
Building Vector on your own host requires a fairly complete development environment!
Loosely, you'll need the following:
- To build Vector: Have working Rustup, Protobuf tools, C++/C build tools (LLVM, GCC, or MSVC), Python, and Perl, `make` (the GNU one preferably), `bash`, `cmake`, and `autotools`.
- To run integration tests: Have `docker` available, or a real live version of that service. (Use `AUTOSPAWN=false`.)
- To run `make check-component-features`: Have `remarshal` installed.
If you find yourself needing to run something inside the Docker environment described above, that's totally fine; they won't collide or hurt each other. In this case, you'd just run `make environment-generate`.
We're interested in reducing our dependencies if simple options exist. Got an idea? Try it out; we'd love to hear of your successes and failures!
To develop Vector, you'll primarily use a few `cargo` commands and `make` tasks, ordered below from most to least frequently run:
# Validate your code can compile
cargo check
make check
# Validate your code actually does compile (in dev mode)
cargo build
make build-dev
# Validate your tests pass
cargo test sources::example
make test scope="sources::example"
# Validate tests (that do not require other services) pass
cargo test
make test
# Validate your tests pass (starting required services in Docker)
make test-integration scope="sources::example"
# Validate your tests pass against a live service.
make test-integration scope="sources::example" autospawn=false
cargo test --features docker sources::example
# Validate all tests pass (starting required services in Docker)
make test-integration
# Run your benchmarks
make bench scope="transforms::example"
cargo bench transforms::example
# Format your code before pushing!
make fmt
cargo fmt
If you run `make` you'll see a full list of all our tasks. Some of these will start Docker containers, sign commits, or even make releases. These are not common development commands and your mileage may vary.
- `/benches` - Internal benchmarks.
- `/config` - Public facing Vector config, included in releases.
- `/distribution` - Distribution artifacts for various targets.
- `/docs` - Structured data used to generate documentation.
- `/lib` - External libraries that do not depend on `vector` but are used within the project.
- `/proto` - Protobuf definitions.
- `/scripts` - Scripts used to generate docs and maintain the repo.
- `/src` - Vector source.
- `/tests` - Various high-level test cases.
Vector includes a `Makefile` in the root of the repo. This serves as a high-level interface for common commands. Running `make` will produce a list of make targets with descriptions. These targets will be referenced throughout this document.
We use `rustfmt` on `stable` to format our code, and CI will verify that your code follows this format style. Before running the following command, make sure `rustfmt` has been installed on the stable toolchain locally.
# To install rustfmt
rustup component add rustfmt
# To format the code
make fmt
- Always use the Tracing crate's key/value style for log events.
- Events should be capitalized and end with a period, `.`.
- Never use `e` or `err` - always spell out `error` to enrich logs and make it clear what the output is.
- Prefer `Display` over `Debug`: `%error`, not `?error`.
Nope!
warn!("Failed to merge value: {}.", err);
Yep!
warn!(message = "Failed to merge value.", %error);
When a new component (a source, transform, or sink) is added, it has to be put behind a feature flag with the corresponding name. This ensures that it is possible to customize Vector builds. See the `features` section in `Cargo.toml` for examples.
In addition, during development of a particular component it is useful to disable all other components to speed up compilation. For example, it is possible to build and run tests only for the `console` sink using:
cargo test --lib --no-default-features --features sinks-console sinks::console
If the tests are already built and only the component file changed, this is around 4 times faster than rebuilding tests with all features.
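As a rough sketch of what this gating looks like (the module and feature names here are illustrative, not Vector's actual layout), a component is typically compiled only when its feature is enabled:

```rust
// Hypothetical sketch: the sink module only exists in builds that enable
// the corresponding feature, so custom builds can omit it entirely.
#[cfg(feature = "sinks-console")]
pub mod console;

// Anything referencing the component must be gated the same way.
#[cfg(feature = "sinks-console")]
pub use console::ConsoleSinkConfig;
```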
Dependencies should be carefully selected and avoided if possible. You can see how dependencies are reviewed in the Reviewing guide.
If a dependency is required only by one or multiple components, but not by Vector's core, make it optional and add it to the list of dependencies of the features corresponding to these components in `Cargo.toml`.
Sinks may implement a health check as a means for validating their configuration against the environment and external systems. Ideally, this allows the system to inform users of problems such as insufficient credentials, unreachable endpoints, non-existent tables, etc. They're not perfect, however, since it's impossible to exhaustively check for issues that may happen at runtime.
When implementing health checks, we prefer false positives to false negatives. This means we would prefer that a health check pass and the sink then fail than to have the health check fail when the sink would have been able to run successfully.
A common cause of false negatives in health checks is performing an operation that the sink itself does not need. For example, listing all of the available S3 buckets and checking that the configured bucket is on that list. The S3 sink doesn't need the ability to list all buckets, and a user that knows that may not have permitted it to do so. In that case, the health check will fail due to bad credentials even though its credentials are sufficient for normal operation.
This leads to a general strategy of mimicking what the sink itself does. Unfortunately, the fact that health checks don't have real events available to them leads to some limitations here. The most obvious example of this is with sinks where the exact target of a write depends on the value of some field in the event (e.g. an interpolated Kinesis stream name). It also pops up for sinks where incoming events are expected to conform to a specific schema. In both cases, random test data is reasonably likely to trigger a potentially false-negative result. Even in simpler cases, we need to think about the effects of writing test data and whether the user would find that surprising or invasive. The answer usually depends on the system we're interfacing with.
In some cases, like the Kinesis example above, the right thing to do might be nothing at all. If we require dynamic information to figure out which entity (i.e. which Kinesis stream, in this case) we're even dealing with, odds are very low that we'll be able to come up with a way to meaningfully validate that it's in working order. It's perfectly valid to have a health check that falls back to doing nothing when there is a data dependency like this.
With all that in mind, here is a simple checklist to go over when writing a new health check:
- Does this check perform different fallible operations from the sink itself?
- Does this check have side effects the user would consider undesirable (e.g. data pollution)?
- Are there situations where this check would fail but the sink would operate normally?
Not all of the answers need to be a hard "no", but we should think about the likelihood that any "yes" would lead to false negatives and balance that against the usefulness of the check as a whole for finding problems. Because we have the option to disable individual health checks, there's an escape hatch for users that fall into a false negative circumstance. Our goal should be to minimize the likelihood of users needing to pull that lever while still making a good effort to detect common problems.
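To make that checklist concrete, here is a minimal sketch of the preferred shape, assuming a hypothetical HTTP-based sink and using `reqwest` purely for illustration (not necessarily what a real sink uses):

```rust
use reqwest::Client;

// Hypothetical sketch: the health check performs the same kind of fallible
// operation the sink itself performs (a request against the configured
// endpoint with the sink's own credentials), rather than a broader
// operation like listing every resource the account can see.
async fn healthcheck(client: &Client, endpoint: &str) -> Result<(), Box<dyn std::error::Error>> {
    let response = client.head(endpoint).send().await?;
    // A non-success status surfaces configuration problems (bad endpoint,
    // insufficient credentials) early, without writing any test data.
    response.error_for_status()?;
    Ok(())
}
```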
For metrics naming, Vector broadly follows the Prometheus metric naming standards. Hence, a metric name:
- Must only contain valid characters, which are ASCII letters and digits, as well as underscores. It should match the regular expression: `[a-z_][a-z0-9_]*`.
- Metrics have a broad template: `<namespace>_<name>_<unit>_[total]`
  - The `namespace` is a single-word prefix that groups metrics from a specific source; for example, host-based metrics like CPU, disk, and memory are prefixed with `host`, Apache metrics are prefixed with `apache`, etc.
  - The `name` describes what the metric measures.
  - The `unit` is a single base unit, for example seconds, bytes, metrics.
  - The suffix should describe the unit in plural form: seconds, bytes. Accumulating counts, both with units or without, should end in `total`, for example `disk_written_bytes_total` and `http_requests_total`.
- Where required, use tags to differentiate the characteristic of the measurement. For example, whilst `host_cpu_seconds_total` is the name of the metric, we also record the `mode` that is being used for each CPU. The `mode` and the specific CPU then become tags on the metric:

host_cpu_seconds_total{cpu="0",mode="idle"}
host_cpu_seconds_total{cpu="0",mode="nice"}
host_cpu_seconds_total{cpu="0",mode="system"}
host_cpu_seconds_total{cpu="0",mode="user"}
When naming options for sinks, sources, and transforms it's important to keep in mind these guidelines:
- Suffix options with their unit. Ex: `_seconds`, `_bytes`, etc.
- Don't repeat the namespace in the option name, e.g. `fingerprinting.fingerprint_bytes`.
- Normalize around time units where relevant and possible, for example using seconds consistently rather than seconds and milliseconds.
- Use nouns as category names, for example `fingerprint` instead of `fingerprinting`.
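As a sketch of how these guidelines look in practice (the component and option names below are made up for illustration):

```rust
use serde::Deserialize;

// Hypothetical config illustrating the guidelines: options carry unit
// suffixes, the category name is a noun (`fingerprint`, not
// `fingerprinting`), and the namespace isn't repeated inside the category
// (`bytes`, not `fingerprint_bytes`).
#[derive(Debug, Deserialize)]
struct ExampleSourceConfig {
    retry_timeout_seconds: u64,
    max_size_bytes: usize,
    fingerprint: FingerprintConfig,
}

#[derive(Debug, Deserialize)]
struct FingerprintConfig {
    bytes: usize,
}
```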
Testing is very important since Vector's primary design principle is reliability. You can read more about how Vector tests in our testing blog post.
Unit tests refer to the majority of inline tests throughout Vector's code. A defining characteristic of unit tests is that they do not require external services to run, therefore they should be much quicker. You can run them with:
cargo test
Integration tests verify that Vector actually works with the services it integrates with. Unlike unit tests, integration tests require external services to run. A few rules when setting up integration tests:
- To ensure all contributors can run integration tests, the service must run in a Docker container.
- The service must listen on a unique port that is configured through an environment variable (see the sketch below).
- Add a `test-integration-<name>` target to Vector's `Makefile` and ensure that it starts the service before running the integration test.
- Add a `test-integration-<name>` job to Vector's `.github/workflows/test.yml` workflow and call your make target accordingly.
Once complete, you can run your integration tests with:
make test-integration-<name>
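For instance, the unique-port rule usually means the test reads the service's address from the environment, along the lines of this sketch (the variable, port, and test names are hypothetical):

```rust
#[cfg(test)]
mod integration_tests {
    // Hypothetical sketch: the test picks up the service's location from an
    // environment variable, so the unique port chosen in the Makefile target
    // (and in CI) can be remapped without code changes.
    #[test]
    fn connects_to_example_service() {
        let address = std::env::var("EXAMPLE_SERVICE_ADDRESS")
            .unwrap_or_else(|_| "127.0.0.1:9000".to_string());
        let stream = std::net::TcpStream::connect(&address)
            .expect("example service should be reachable");
        // ... exercise the integration against `stream` ...
        drop(stream);
    }
}
```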
Vector also offers blackbox testing via Vector's test harness. This is a complex testing suite that tests Vector's performance in real-world environments. It is typically used for benchmarking, but also correctness testing.
You can run these tests within a PR as described in the CI section.
If you are developing a particular component and want to quickly iterate on unit tests related only to this component, the following approach can reduce waiting times:
- Install cargo-watch.
- (Only for GNU/Linux) Install LLVM 9 (for example, package `llvm-9` on Debian) and set the `RUSTFLAGS` environment variable to use `lld` as the linker:

export RUSTFLAGS='-Clinker=clang-9 -Clink-arg=-fuse-ld=lld'

- Run in the root directory of Vector's source:

cargo watch -s clear -s \
  'cargo test --lib --no-default-features --features=<component type>-<component name> <component type>::<component name>'

For example, if the component is the `add_fields` transform, the command above turns into:

cargo watch -s clear -s \
  'cargo test --lib --no-default-features --features=transforms-add_fields transforms::add_fields'
We use `flog` to build a sample set of log files to test sending logs from a file. This can be done with the following command on Mac with Homebrew. Installation instructions for flog can be found here.
flog --bytes $((100 * 1024 * 1024)) > sample.log
This will create a 100MiB sample log file in the `sample.log` file.
All benchmarks are placed in the `/benches` folder. You can run benchmarks via the `make bench` command. In addition, Vector maintains a full test harness for complex end-to-end integration and performance testing.
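A benchmark in `/benches` typically looks something like the following sketch, assuming the `criterion` harness (the routine being measured is a stand-in, not a real Vector component):

```rust
use criterion::{criterion_group, criterion_main, Criterion};

// Hypothetical benchmark: measures a tiny stand-in parsing routine. Real
// benchmarks in /benches exercise actual Vector components and topologies.
fn bench_parse_line(c: &mut Criterion) {
    c.bench_function("parse_line", |b| {
        b.iter(|| "status=200 method=GET".split_whitespace().count())
    });
}

criterion_group!(benches, bench_parse_line);
criterion_main!(benches);
```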
If you're trying to improve Vector's performance (or understand why your change made it worse), profiling is a useful tool for seeing where time is being spent.
While there are a bunch of useful profiling tools, a simple place to get started is with Linux's `perf`. Before getting started, you'll likely need to give yourself access to collect stats:
echo -1 | sudo tee /proc/sys/kernel/perf_event_paranoid
You'll also want to edit `Cargo.toml` and make sure that Vector is being built with debug symbols in release mode. This ensures that you'll get human-readable info in the eventual output:
[profile.release]
debug = true
Then you can start up a release build of Vector with whatever config you're interested in profiling.
cargo run --release -- --config my_test_config.toml
Once it's started, use the `ps` tool (or equivalent) to make a note of its PID. We'll use this to tell `perf` which process we would like it to collect data about.
The next step is somewhat dependent on the config you're testing. For this
example, let's assume you're using a simple TCP-mode socket source listening on
port 9000. Let's also assume that you have a large file of example input in
`access.log` (you can use a tool like `flog` to generate this).
With all that prepared, we can send our test input to Vector and collect data while it is under load:
perf record -F99 --call-graph dwarf -p $VECTOR_PID socat -dd OPEN:access.log TCP:localhost:9000
This instructs `perf` to collect data from our already-running Vector process for the duration of the `socat` command. The `-F` argument is the frequency at which `perf` should sample the Vector call stack. Higher frequencies will collect more data and produce more detailed output, but can produce enormous amounts of data that take a very long time to process. Using `-F99` works well when your input data is large enough to take a minute or more to process, but feel free to adjust both input size and sampling frequency for your setup.
It's worth noting that this is not the normal way to profile programs with `perf`. Usually you would simply run something like `perf record my_program` and not have to worry about PIDs and such. We differ from this because we're only interested in data about what Vector is doing while under load. Running it directly under `perf` would collect data for the entire lifetime of the process, including startup, shutdown, and idle time. By telling `perf` to collect data only while the load generation command is running, we get a more focused dataset and don't have to worry about timing different commands in quick succession.
You'll now find a `perf.data` file in your current directory with all of the information that was collected. There are different ways to process this, but one of the most useful is to create a flamegraph. For this we can use the `inferno` tool (available via `cargo install`):
perf script | inferno-collapse-perf > stacks.folded
cat stacks.folded | inferno-flamegraph > flamegraph.svg
And that's it! You now have a flamegraph SVG file that can be opened and navigated in your favorite web browser.
There is a special flow for when you develop portions of Vector that are designed to work with Kubernetes, like the `kubernetes_logs` source or the `deployment/kubernetes/*.yaml` configs.
This flow facilitates building Vector and deploying it into a cluster.
There are some extra requirements besides what you'd normally need to work on Vector:
- a `linux` system (create an issue if you want to work with another OS and we'll help)
- `skaffold`
- `docker`
- `kubectl`
- `kustomize`
- a `minikube`-powered or other k8s cluster
- `cargo watch`
Once you have the requirements, use the `scripts/skaffold.sh dev` command. That's it: just one command should take care of everything!
It will:
- build the `vector` binary in development mode,
- build a docker image from this binary via `skaffold/docker/Dockerfile`,
- deploy `vector` into the Kubernetes cluster at your current kubectl context using the built docker image and a mix of our production deployment configuration from the `distribution/kubernetes/*.yaml` and the special dev-flow configuration at `skaffold/manifests/*.yaml`; see `kustomization.yaml` for the exact specification.
As a result of invoking `scripts/skaffold.sh dev`, you should see a `skaffold` process running on your local machine, printing the logs from the deployed `vector` instance.
To stop the process, press `Ctrl+C`, and wait for `skaffold` to clean up the cluster state and exit.
`scripts/skaffold.sh` wraps `skaffold`, so you can use other `skaffold` subcommands if they fit you better.
You might need to tweak `skaffold`; here are some hints:

- `skaffold` will try to detect whether a local cluster is used; if a local cluster is used, `skaffold` won't push the docker images it builds to a registry. See this page for how you can troubleshoot and tweak this behavior.
- `skaffold` can rewrite the image name so that you don't try to push a docker image to a repo that you don't have access to. See this page for more info.
- For the rest of the `skaffold` tweaks you might want to apply, check out this page.
In some cases `skaffold` may not work. It's possible to go through the dev flow manually, without `skaffold`.
One of the important things `skaffold` does is patch the configuration to tie things together. If you want to go without it, you'll have to take care of that yourself, so some additional knowledge of Kubernetes inner workings is required.
Essentially, the steps you have to take to deploy manually are the same ones that `skaffold` performs, and they're outlined in the previous section.
Kubernetes integration has a lot of parts that can go wrong.
To cope with the complexity and ensure we maintain high quality, we use E2E (end-to-end) tests.
E2E tests normally run in CI, so there's typically no need to run them manually.
- a `kubernetes` cluster (`minikube` has special support, but any cluster should work)
- `docker`
- `kubectl`
- `bash`

Vector release artifacts are prepared for E2E tests, so the ability to build them is required too; see the Vector docs for more details.
Note: `minikube` had a bug in the `1.12.x` versions that affected our test process - see kubernetes/minikube#8799. Use version `1.13.0+`, which has this bug fixed.
Note: `minikube` has trouble running on ZFS systems. If you're using ZFS, we suggest using a cloud cluster or `minik8s` with a local registry.
To run the E2E tests, use the following command:
CONTAINER_IMAGE_REPO=<your name>/vector-test make test-e2e-kubernetes
Where `CONTAINER_IMAGE_REPO` is the docker image repo name to use, without the part after the `:`. Replace `<your name>` with your Docker Hub username.
You can also pass additional parameters to adjust the behavior of the test:
- `QUICK_BUILD=true` - use a development build and a skaffold image from the dev flow instead of a production docker image. Significantly speeds up the preparation process, but doesn't guarantee correctness in the release build. Useful for development of the tests or Vector code to speed up the iteration cycles.
- `USE_MINIKUBE_CACHE=true` - instead of pushing the built docker image to the registry under the specified name, directly load the image into a `minikube`-controlled cluster node. Requires you to test against a `minikube` cluster. Eliminates the need to have a registry to run tests. When `USE_MINIKUBE_CACHE=true` is set, we provide a default value for the `CONTAINER_IMAGE_REPO` so it can be omitted. Can be set to `auto` (default) to automatically detect whether to use `minikube cache` or not, based on the current `kubectl` context. To opt out, set `USE_MINIKUBE_CACHE=false`.
- `CONTAINER_IMAGE=<your name>/vector-test:tag` - completely skip the step of building the Vector docker image, and use the specified image instead. Useful to speed up iteration when you already have a Vector docker image you want to test against.
- `SKIP_CONTAINER_IMAGE_PUBLISHING=true` - completely skip the image publishing step. Useful when you want to speed up iteration and you know the Vector image you want to test is already available to the cluster you're testing against.
- `SCOPE` - pass a filter to the `cargo test` command to filter the tests to run, effectively equivalent to `cargo test -- $SCOPE`.
Passing additional commands is done like so:
QUICK_BUILD=true USE_MINIKUBE_CACHE=true make test-e2e-kubernetes
or
QUICK_BUILD=true CONTAINER_IMAGE_REPO=<your name>/vector-test make test-e2e-kubernetes
The Kubernetes integration architecture is largely inspired by RFC 2221, so this is a concise outline of the effective design rather than a deep dive into the concepts.
With the `kubernetes_logs` source, Vector connects to the Kubernetes API, doing a streaming watch request over the `Pod`s executing on the same `Node` that Vector itself runs on. Once Vector gets the list of all the `Pod`s that are running on the `Node`, it starts collecting logs from the log files corresponding to each `Pod`. Only plaintext (as in non-gzipped) files are taken into consideration.
The log files are then parsed into events, and the said events are annotated with the metadata from the corresponding `Pod`s, correlated via the file path of the originating log file.
The events are then passed to the topology.
We use a custom Kubernetes API client and machinery, which lives at `src/kubernetes`.
The `kubernetes_logs` source lives at `src/sources/kubernetes_logs`.
There is also an end-to-end (E2E) test framework that resides at `lib/k8s-test-framework`, and the actual end-to-end tests using that framework are at `lib/k8s-e2e-tests`.
The Kubernetes-related distribution bits are at `distribution/docker`, `distribution/kubernetes`, and `distribution/helm`.
There are also snapshot tests for Helm at `tests/helm-snapshots`.
The development assistance resources are located at `skaffold.yaml` and the `skaffold` dir.
After making your change, you'll want to prepare it for Vector's users (mostly humans). This usually entails updating documentation and announcing your feature.
Documentation is very important to the Vector project! The official
docs at https://vector.dev/docs are built using structured data written in
[CUE], a language designed for data templating and validation. All of Vector's
CUE sources are in the `/docs` folder.
Vector is currently using CUE version 0.3.2. Be sure to use precisely this version, as CUE is evolving quickly and you can expect breaking changes in each release.
When the HTML output for the Vector docs is built, the `vector` repo is cloned (in another repo) and these CUE sources are converted into one big JSON object using the `cue export` command. That JSON is then used as an input to the site build.
Vector has some CUE-related CI checks that are run whenever changes are made to the `docs` directory. This includes checks to make sure that the CUE sources are properly formatted. To run CUE's autoformatting, run this command from the `vector` root:
cue fmt ./docs/**/*.cue
If that rewrites any files, make sure to commit your changes or else you'll see CI failures.
In addition to proper formatting, the CUE sources need to be valid, that is, the provided data needs to conform to various CUE schemas. To check the validity of the CUE sources:
make check-docs
A good practice for writing CUE is to make small, incremental changes and to frequently check to ensure that those changes are valid. If you introduce larger changes that introduce multiple errors, you may have difficulty interpreting CUE's verbose (and not always super helpful) log output. In fact, we recommend using a tool like [watchexec] to validate the sources every time you save a change:
# From the root
watchexec "make check-docs"
Developers do not need to maintain the `Changelog`. It is automatically generated via the `make release` command, made possible by the use of conventional commit titles.
A highlight should offer meaningful value to users. This is inherently subjective, and it is impossible to define exact rules for this distinction. But we should be cautious not to dilute the meaning of a highlight by producing low-value highlights.
Highlights are not blog posts. They are short, one- or two-paragraph announcements. Highlights should allude to, or link to, a blog post if relevant.
For example, this performance increase announcement is noteworthy, but also deserves an in-depth blog post covering the work that resulted in the performance benefit. Notice that the highlight alludes to an upcoming blog post. This allows us to communicate a high-value performance improvement without being blocked by an in-depth blog post.
Please see the `SECURITY.md` file.
To protect all users of Vector, the following legal requirements are made. If you have additional questions, please contact us.
Vector requires all contributors to agree to the DCO. DCO stands for Developer Certificate of Origin and is maintained by the Linux Foundation. It is an attestation attached to every commit made by every developer. All contributions are covered by, and fall under, the DCO.
Trivial changes, such as spelling fixes, do not need to be signed.
Contributions are covered by the DCO and do not require a CLA.
It's simpler, clearer, and still protects users of Vector. We believe the DCO more accurately embodies the principles of open-source. More info can be found here:
Nope! The DCO confirms that you are entitled to submit the code, which assumes that you are authorized to do so. It treats you like an adult and relies on your accurate statement about your rights to submit a contribution.
No problem! We made this simple with the `signoff` Makefile target:
make signoff
If you prefer to do this manually:
git commit --amend --signoff
If you have questions about this document or the project as a whole, please contact us at [email protected].