Fix some typos
dupondje committed Jan 15, 2020
1 parent 9711115 commit e9a2141
Showing 11 changed files with 60 additions and 60 deletions.
30 changes: 15 additions & 15 deletions README.md
@@ -4,7 +4,7 @@

Monitors VMware vSphere stats using govmomi. Sinks metrics to one of many time series backends.

-Written in go to achieve fast sampling rates and high throughput sink. Successfuly benchmarked against 3000 VM's, logging 150,000 metrics per minute to an ElasticSearch backend.
+Written in go to achieve fast sampling rates and high throughput sink. Successfully benchmarked against 3000 VM's, logging 150,000 metrics per minute to an ElasticSearch backend.

Compatible backends:

@@ -52,12 +52,12 @@ You can select the extra data collected by using the "Properties" property:
* tags: reports the tags associated with the virtual machine
* numcpu: reports the number of virtual cpus the virtual machine has
* memorysizemb: reports the quantity of memory the virtual machine has
-* disks: reports the logical disks capacity inside the virtua machine
+* disks: reports the logical disks capacity inside the virtual machine
* **all**: reports all the information

### vCenter parameters

-vCenter parameters can be set in the configuration file or via environement variable.
+vCenter parameters can be set in the configuration file or via environment variable.

The configuration file needs the username, password and hostname of the vCenter (from [sample config](./vsphere-graphite-example.json)):

@@ -68,7 +68,7 @@ The configuration file needs the username, password and hostname of the vCenter
]
```

-If set via environement variable you can set multiple vcenters via ```VCENTER_*=<username>:<password>@<hostname>```
+If set via environment variable you can set multiple vcenters via ```VCENTER_*=<username>:<password>@<hostname>```

To follow the example given in the sample file:
```
@@ -78,14 +78,14 @@ VCENTER_VC2=CONTOSO\\Administrator:$P@[email protected]

### Backend parameters

-Backend parameters can be set in the config and will allways be overriden by environment variables.
-This allows to use a generic config in a container image and set the backend by environement variables.
+Backend parameters can be set in the config and will always be overridden by environment variables.
+This allows to use a generic config in a container image and set the backend by environment variables.

* Type (CONFIG_TYPE):

Type of backend to use.

-Currently "graphite", "influxdb", "thinfluxdb" (embeded influx client), "elastic", "prometheus", "thinprometheus" (embeded prometheus) and "fluentd"
+Currently "graphite", "influxdb", "thinfluxdb" (embeded influx client), "elastic", "prometheus", "thinprometheus" (embedded prometheus) and "fluentd"

* Hostname (CONFIG_HOSTNAME):

@@ -108,7 +108,7 @@ This allows to use a generic config in a container image and set the backend by
Only supported by "influx", "thininflux" and "elastic" backends.

>
-> Prometheus suppport for this would require certificate management.
+> Prometheus support for this would require certificate management.
>
* Username (CONFIG_USERNAME):
@@ -147,11 +147,11 @@ All builds are pushed to docker:

Default tags includes:

-* commit for specific commit in the branch (usefull to run from tip)
+* commit for specific commit in the branch (useful to run from tip)
* latest for latest release
-* specific realease tag or version
+* specific release tag or version

-The JSON configration file can be passed by mounting to /etc. Edit the configuration file and set it in the place you like here $(pwd)
+The JSON configuration file can be passed by mounting to /etc. Edit the configuration file and set it in the place you like here $(pwd)

> docker run -t -v $(pwd)/vsphere-graphite.json:/etc/vsphere-graphite.json cblomart/vsphere-graphite:latest
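The backend can also be supplied entirely through environment variables instead of a mounted file; a minimal sketch combining the CONFIG_* and VCENTER_* conventions described above (hostnames and credentials are illustrative placeholders, not values from this repository):

```
docker run -t \
  -e CONFIG_TYPE=graphite \
  -e CONFIG_HOSTNAME=graphite.example.org \
  -e VCENTER_VC1=administrator:secret@vcenter1.example.org \
  cblomart/vsphere-graphite:latest
```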
@@ -163,8 +163,8 @@ A sample [docker compose file](./compose/vsphere-graphite-graphite-test.yml) is
this sample will start:

* vcsim ([vCenter simulator by govmomi](https://github.com/vmware/govmomi/tree/master/vcsim))
-* graphite (["Offical" Graphite docker image](https://hub.docker.com/r/graphiteapp/graphite-statsd/)) the port 80 will be published to access the web interface.
-* vsphere-graphite with the necessary environement parameters to address the started backend and vcenter
+* graphite (["Official" Graphite docker image](https://hub.docker.com/r/graphiteapp/graphite-statsd/)) the port 80 will be published to access the web interface.
+* vsphere-graphite with the necessary environment parameters to address the started backend and vcenter

To start this with swarm:

@@ -180,7 +180,7 @@ To start this with swarm:
## Execute vsphere-graphite in shell
-Heavilly based on [govmomi](https://github.com/vmware/govmomi) but also on [daemon](github.com/takama/daemon) which provides simple daemon/service integration.
+Heavily based on [govmomi](https://github.com/vmware/govmomi) but also on [daemon](github.com/takama/daemon) which provides simple daemon/service integration.
### Install golang
@@ -238,7 +238,7 @@ So don't hesitate and tell us what doesn't work or what you miss.
## Donations
-This project is largely alive because of the forementioned contributors. Our time is precious bet it is even more precious to us when we can spend it on our beloved projects. So don't hesitate to make a donation (see badge)
+This project is largely alive because of the aforementioned contributors. Our time is precious bet it is even more precious to us when we can spend it on our beloved projects. So don't hesitate to make a donation (see badge)
## License
4 changes: 2 additions & 2 deletions backend/backend.go
@@ -107,7 +107,7 @@ func (backend *Config) Init() (*chan Channels, error) {
case ThinInfluxDB:
//Initialize thin Influx DB client
log.Printf("backend %s: initializing\n", backendType)
-thininfluxclt, err := thininfluxclient.NewThinInlfuxClient(backend.Hostname, backend.Port, backend.Database, backend.Username, backend.Password, "s", backend.Encrypted)
+thininfluxclt, err := thininfluxclient.NewThinInfluxClient(backend.Hostname, backend.Port, backend.Database, backend.Username, backend.Password, "s", backend.Encrypted)
if err != nil {
log.Printf("backend %s: error creating client - %s\n", backendType, err)
return queries, err
@@ -364,7 +364,7 @@ func (backend *Config) SendMetrics(metrics []*Point, cleanup bool) {
/** only flush index at the end of cycle - see Clean function
_, err = backend.elastic.Flush().Index(elasticindex).Do(context.Background())
if err != nil {
-log.Printf("backend %s: errr flushing data - %s\n", backendType, err)
+log.Printf("backend %s: err flushing data - %s\n", backendType, err)
return
}
log.Printf("backend %s: elastic indexing flushed", backendType)
2 changes: 1 addition & 1 deletion backend/point.go
@@ -130,7 +130,7 @@ func (point *Point) GetInfluxPoint(noarray bool, valuefield string) *InfluxPoint
return &ip
}

-// ToInflux serialises the data to be consumed by influx line protocol
+// ToInflux serializes the data to be consumed by influx line protocol
// see https://docs.influxdata.com/influxdb/v1.2/write_protocols/line_protocol_tutorial/
func (point *Point) ToInflux(noarray bool, valuefield string) string {
return point.GetInfluxPoint(noarray, valuefield).ToInflux(noarray, valuefield)
2 changes: 1 addition & 1 deletion backend/prometheus.go
@@ -31,7 +31,7 @@ func (backend *Config) Collect(ch chan<- prometheus.Metric) {
return
}

-// points recieved
+// points received
points := 0
// handle timeout between point reception
rectimer := time.NewTimer(100 * time.Millisecond)
6 changes: 3 additions & 3 deletions backend/thininfluxclient/thininfluxclient.go
@@ -33,8 +33,8 @@ type ThinInfluxClient struct {
password string
}

-// NewThinInlfuxClient creates a new thin influx client
-func NewThinInlfuxClient(server string, port int, database, username, password, precision string, ssl bool) (ThinInfluxClient, error) {
+// NewThinInfluxClient creates a new thin influx client
+func NewThinInfluxClient(server string, port int, database, username, password, precision string, ssl bool) (ThinInfluxClient, error) {
// config checks
errormessage := ""
if len(server) == 0 {
@@ -54,7 +54,7 @@ func NewThinInlfuxClient(server string, port int, database, username, password,
}
}
if !found {
-errormessage = "Precision '" + precision + "' not in suppoted presisions " + strings.Join(precisions, ",")
+errormessage = "Precision '" + precision + "' not in supported presisions " + strings.Join(precisions, ",")
}
fullurl := "http"
if ssl {
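For reference, the renamed constructor keeps its signature, so callers only need the new name. A minimal usage sketch (the import path follows the repository layout; host, credentials and database values are illustrative assumptions, not taken from this commit):

```go
package main

import (
	"log"

	"github.com/cblomart/vsphere-graphite/backend/thininfluxclient"
)

func main() {
	// server, port, database, username, password, precision, ssl
	client, err := thininfluxclient.NewThinInfluxClient("influx.example.org", 8086, "vsphere", "admin", "secret", "s", false)
	if err != nil {
		log.Fatalf("could not create thin influx client: %s", err)
	}
	_ = client // ready to send points to InfluxDB
}
```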
10 changes: 5 additions & 5 deletions backend/thinprometheusclient.go
@@ -40,7 +40,7 @@ func NewThinPrometheusClient(server string, port int) (ThinPrometheusClient, err
return ThinPrometheusClient{Hostname: server, Port: port, address: address}, nil
}

-// ListenAndServe will start the listen thead for metric requests
+// ListenAndServe will start the listen thread for metric requests
func (client *ThinPrometheusClient) ListenAndServe() error {
log.Printf("thinprom: start listening for metric request at %s\n", client.address)
return fasthttp.ListenAndServe(client.address, fasthttp.CompressHandlerLevel(requestHandler, 9))
@@ -58,7 +58,7 @@ func requestHandler(ctx *fasthttp.RequestCtx) {
// create a buffer to organise metrics per type
buffer := map[string][]string{}
log.Println("thinprom: sending query request")
-// start the queriess
+// start the queries
select {
case *queries <- channels:
log.Println("thinprom: sent query Request")
@@ -70,9 +70,9 @@ func requestHandler(ctx *fasthttp.RequestCtx) {
timeout := time.NewTimer(100 * time.Millisecond)
// collected points
points := 0
-// recieve done
+// receive done
recdone := false
-log.Println("Tthinprom: waiting for query results")
+log.Println("thinprom: waiting for query results")
L:
for {
select {
@@ -85,7 +85,7 @@
}
}
timeout.Reset(100 * time.Millisecond)
-// increased recieved points
+// increased received points
points++
// add point to the buffer
addToThinPrometheusBuffer(buffer, &point)
4 changes: 2 additions & 2 deletions config/config.go
@@ -5,7 +5,7 @@ import (
"github.com/cblomart/vsphere-graphite/vsphere"
)

-// Configuration : configurarion base
+// Configuration : configuration base
type Configuration struct {
VCenters []*vsphere.VCenter
Metrics []*vsphere.Metric
@@ -19,7 +19,7 @@ type Configuration struct {
ReplacePoint bool
// VCenterResultLimit is the maximum amount of results to fetch back in one query
VCenterResultLimit int
-// VCenterInstanceRatio is the number of effective result in fonction of the metrics.
+// VCenterInstanceRatio is the number of effective result in function of the metrics.
// This is necessary due to the possibility to retrieve instances with wildcards
VCenterInstanceRatio float64
}
8 changes: 4 additions & 4 deletions docs/index.md
@@ -1,25 +1,25 @@
# vsphere-graphite

-Altought it started out as a simple tool to pump vsphere statistics to graphite it was extended to support different backends:
+Although it started out as a simple tool to pump vsphere statistics to graphite it was extended to support different backends:

* [prometheus](https://prometheus.io/)
* [influxdb](https://www.influxdata.com/)
* [fluentd](https://www.fluentd.org/)
* [elasticsearch](https://www.elastic.co/)
* [graphite](https://graphiteapp.org/)

-It will basically get the requested statistics for each virtual machine or host and send them to the backend every minuts.
+It will basically get the requested statistics for each virtual machine or host and send them to the backend every minute.

The configuration allows to specify per object types which statistics should be fetched.
-By default cpu count and memory count and, when vmware tools are running, disk informations will be reported.
+By default cpu count and memory count and, when vmware tools are running, disk information will be reported.

For the backends that supports it, extra metadata can be fetched:

* Datastores
* Networks
* Cluster
* ResourcePool
-* Folderp
+* Folder

Sample reports:

12 changes: 6 additions & 6 deletions vsphere-graphite.go
@@ -142,7 +142,7 @@ func (service *Service) Manage() (string, error) {
defer pprof.StopCPUProfile()
}

-//force backend values to environement varialbles if present
+//force backend values to environment variables if present
s := reflect.ValueOf(conf.Backend).Elem()
numfields := s.NumField()
for i := 0; i < numfields; i++ {
@@ -255,7 +255,7 @@ func (service *Service) Manage() (string, error) {
// We must use a buffered channel or risk missing the signal
// if we're not ready to receive when the signal is sent.
interrupt := make(chan os.Signal, 1)
-//lint:ignore SA1016 in this case we wan't to quit
+//lint:ignore SA1016 in this case we want to quit
signal.Notify(interrupt, os.Interrupt, os.Kill, syscall.SIGTERM) // nolint: megacheck

// Set up a channel to receive the metrics
@@ -264,7 +264,7 @@ func (service *Service) Manage() (string, error) {
ticker := time.NewTicker(time.Second * time.Duration(conf.Interval))
defer ticker.Stop()

-// Set up a ticker to collect metrics at givent interval (except for non scheduled backend)
+// Set up a ticker to collect metrics at given interval (except for non scheduled backend)
if !conf.Backend.Scheduled() {
ticker.Stop()
} else {
@@ -275,7 +275,7 @@ func (service *Service) Manage() (string, error) {
}
}

-// Memory statisctics
+// Memory statistics
var memstats runtime.MemStats
// timer to execute memory collection
memtimer := time.NewTimer(time.Second * time.Duration(10))
@@ -292,7 +292,7 @@ func (service *Service) Manage() (string, error) {
for {
select {
case value := <-metrics:
-// reset timer as a point has been recieved.
+// reset timer as a point has been received.
// do that in the main thread to avoid collisions
if !memtimer.Stop() {
select {
@@ -353,7 +353,7 @@ func (service *Service) Manage() (string, error) {
log.Printf("memory usage: sys=%s alloc=%s\n", bytefmt.ByteSize(memstats.Sys), bytefmt.ByteSize(memstats.Alloc))
log.Printf("go routines: %d", runtime.NumGoroutine())
if conf.MEMProfiling {
-f, err := os.OpenFile("/tmp/vsphere-graphite-mem.pb.gz", os.O_RDWR|os.O_CREATE, 0600) // nolin.vetshaddow
+f, err := os.OpenFile("/tmp/vsphere-graphite-mem.pb.gz", os.O_RDWR|os.O_CREATE, 0600) // nolint.vetshaddow
if err != nil {
log.Fatal("could not create Mem profile: ", err)
}
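The memory profile written above can be inspected afterwards with the standard Go tooling, assuming the process has written the file to the path shown in the code:

```
go tool pprof /tmp/vsphere-graphite-mem.pb.gz
```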
18 changes: 9 additions & 9 deletions vsphere/cache.go
@@ -11,7 +11,7 @@ import (
"github.com/vmware/govmomi/vim25/types"
)

-// Cache will hold some informations in memory
+// Cache will hold some information's in memory
type Cache map[string]interface{}

var lock = sync.RWMutex{}
@@ -158,8 +158,8 @@ func (c *Cache) GetDiskInfos(vcenter, section, i string) *[]types.GuestDiskInfo
return nil
}

-// GetVirtualMahineConnectionState gets a virtual machine connection state from cache
-func (c *Cache) GetVirtualMahineConnectionState(vcenter, section, i string) *types.VirtualMachineConnectionState {
+// GetVirtualMachineConnectionState gets a virtual machine connection state from cache
+func (c *Cache) GetVirtualMachineConnectionState(vcenter, section, i string) *types.VirtualMachineConnectionState {
if v, ok := c.get(vcenter, section, i).(*types.VirtualMachineConnectionState); ok {
return v
}
@@ -174,8 +174,8 @@ func (c *Cache) GetHostSystemConnectionState(vcenter, section, i string) *types.
return nil
}

-// GetVirtualMahinePowerState gets a virtual machine power state from cache
-func (c *Cache) GetVirtualMahinePowerState(vcenter, section, i string) *types.VirtualMachinePowerState {
+// GetVirtualMachinePowerState gets a virtual machine power state from cache
+func (c *Cache) GetVirtualMachinePowerState(vcenter, section, i string) *types.VirtualMachinePowerState {
if v, ok := c.get(vcenter, section, i).(*types.VirtualMachinePowerState); ok {
return v
}
@@ -194,7 +194,7 @@ func (c *Cache) GetHostSystemPowerState(vcenter, section, i string) *types.HostS
func (c *Cache) GetConnectionState(vcenter, section, i string) *string {
value := ""
if strings.HasPrefix(i, "vm-") {
-state := c.GetVirtualMahineConnectionState(vcenter, section, i)
+state := c.GetVirtualMachineConnectionState(vcenter, section, i)
if state == nil {
return nil
}
@@ -215,7 +215,7 @@ func (c *Cache) GetConnectionState(vcenter, section, i string) *string {
func (c *Cache) GetPowerState(vcenter, section, i string) *string {
value := ""
if strings.HasPrefix(i, "vm-") {
-state := c.GetVirtualMahinePowerState(vcenter, section, i)
+state := c.GetVirtualMachinePowerState(vcenter, section, i)
if state == nil {
return nil
}
@@ -232,7 +232,7 @@ func (c *Cache) GetPowerState(vcenter, section, i string) *string {
return nil
}

-// Clean cache of unknows references
+// Clean cache of unknown references
func (c *Cache) Clean(vcenter string, section string, refs []string) {
lock.Lock()
defer lock.Unlock()
@@ -434,7 +434,7 @@ func (c *Cache) FindNames(vcenter, section, moref string) []string {
return names
}

-// FindTags finds objects in cachee and create a tag array
+// FindTags finds objects in cache and create a tag array
func (c *Cache) FindTags(vcenter, moref string) []string {
tags := []string{}
ptr := cache.GetTags(vcenter, "tags", moref)
