This repository has been archived by the owner on Oct 11, 2024. It is now read-only.

Archive documentation wiki within the repo
Krinkle committed Oct 11, 2024
1 parent a7e7d6a commit c25a15b
Showing 24 changed files with 599 additions and 33 deletions.
README.md: 55 changes (22 additions, 33 deletions)
[![Build Status](https://github.com/jquery/testswarm/actions/workflows/CI.yaml/badge.svg?event=push)](https://github.com/jquery/testswarm/actions/workflows/CI.yaml)
[![Tested with QUnit](https://qunitjs.com/testedwith.svg)](https://qunitjs.com/)

TestSwarm
=================

TestSwarm provides distributed continuous integration testing for JavaScript.

> **⚠️ Project status**
> TestSwarm remains used by jQuery Foundation projects such as jQuery and jQuery UI, but is no longer under active development. Critical issues may be patched, but new issues will not be addressed.
>
> We recommend reviewing these alternatives: [QTap](https://github.com/qunitjs/qtap), [Karma](https://karma-runner.github.io/), [Testem](https://github.com/testem/testem), [grunt-contrib-qunit](https://github.com/gruntjs/grunt-contrib-qunit), [browserstack-runner](https://github.com/browserstack/browserstack-runner/), [Airtap](https://github.com/airtap/airtap), [Intern](https://theintern.io/), [Web Test Runner](https://github.com/brandonaaron/web-test-runner-qunit).

## Documentation

* [About TestSwarm](./docs/About.md) (Philosophy, Architecture, How is it different?)
* [API Guide](./docs/API.md)
* [Automation Guide](./docs/Automation.md)
* [How to: Submit jobs](./scripts/addjob/README.md)
* [Project history](./docs/History.md) (Screenshots)

**Further reading**:

* [JavaScript Testing Does Not Scale](http://ejohn.org/blog/javascript-testing-does-not-scale/), John Resig, 2009.
* [TestSwarm Alpha Open!](https://johnresig.com/blog/test-swarm-alpha-open/), John Resig, 2009.
* [JSConf talk: TestSwarm](http://ejohn.org/blog/jsconf-talk-games-performance-testswarm/), John Resig, 2009.
* [Video: TestSwarm Walkthrough](http://www.vimeo.com/6281121), John Resig, 2009.

## Quick start

Clone the repo:

```sh
git clone https://github.com/jquery/testswarm.git
```

## Installation

[…]

You're welcome to use the GitHub [issue tracker](https://github.com/jquery/testswarm/issues).

Some of us are also on Gitter at [jquery/dev](https://gitter.im/jquery/dev).

## Copyright and license

See [LICENSE.txt](./LICENSE.txt).

[…]

Releases will be numbered in the following format: […]

The `-alpha` suffix is used to indicate unreleased versions in development.

For more information on SemVer, please visit <https://semver.org/>.

docs/API.md: 27 changes (27 additions, 0 deletions)
As of v1.0.0, TestSwarm provides an API.

It is reachable at `{swarmroot}/api.php`.

## Formats

* [JSON](https://en.wikipedia.org/wiki/JSON) (**recommended**)
* [JSON-P](https://en.wikipedia.org/wiki/JSONP) (using `callback` parameter)
* Debug (see below)

### JSON

See also [http://json.org/](http://json.org/) for a list of decoders in various programming languages.
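
A minimal sketch of fetching the JSON format from a browser or Node.js. The `action` query parameter, the `swarmstate` action name, and the `swarm.example.org` host are assumptions for illustration only; see the Actions section below for what your install actually provides.

```js
// Hedged sketch: query a hypothetical TestSwarm install for JSON and parse it.
// `action=swarmstate` is an illustrative guess, not a documented contract here.
const swarmRoot = 'https://swarm.example.org';

async function getSwarmState() {
  const response = await fetch(`${swarmRoot}/api.php?action=swarmstate`);
  if (!response.ok) {
    throw new Error(`TestSwarm API request failed: HTTP ${response.status}`);
  }
  return response.json();
}

getSwarmState().then((state) => console.log(state));
```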

### JSON-P

Passes the JSON to a callback function (named by the `callback` URL parameter). This enables cross-domain use of the TestSwarm API in the web browser.

**NB:** For security reasons and to avoid abuse, the API does not allow logged-in sessions via JSONP.
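
A minimal JSON-P sketch for the browser, using the same hypothetical swarm root and action as above. Only the `callback` parameter is documented here; everything else is illustrative.

```js
// Hedged sketch: load the API response via a <script> tag so it can be used
// cross-domain. The named global callback receives the JSON payload.
function testswarmJsonp(swarmRoot, params, callbackName) {
  var script = document.createElement('script');
  window[callbackName] = function (data) {
    console.log('TestSwarm API response:', data);
    delete window[callbackName];
    script.remove();
  };
  var query = new URLSearchParams(Object.assign({}, params, { callback: callbackName }));
  script.src = swarmRoot + '/api.php?' + query.toString();
  document.head.appendChild(script);
}

// Usage (hypothetical action name):
testswarmJsonp('https://swarm.example.org', { action: 'swarmstate' }, 'onSwarmState');
```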

### Debug

The debug format instead shows an HTML page with information about the request, and the response is rendered in a "pretty" human-readable format using PHP's `var_dump` function. Together with the configuration options `php_error_reporting` and `db_log_queries` in the `[debug]` section of `testswarm.ini`, this provides a wide-ranging debugging interface for API queries.

## Actions

A full list of actions can be found in the `./inc/actions` directory of your TestSwarm installation. See the `@actionXXX` comments above the `doAction` methods in each class for more information about how each action should be used and which parameters and request methods it requires.
docs/About.md: 79 changes (79 additions, 0 deletions)
**TestSwarm** provides distributed continuous integration testing for JavaScript. It was originally created by [John Resig](http://ejohn.org/) as a basic tool to support unit testing of the [jQuery JavaScript library](http://jquery.com). It has since become a [jQuery Foundation](https://jquery.org/) project and is currently maintained by [Timo Tijhof](https://timotijhof.net).

## Philosophy

<img align="right" src="./img/2013-home-open.png" width="450" alt="" title="Screenshot of a TestSwarm install">

The primary goal of TestSwarm is to take the complicated, time-consuming process of running JavaScript test suites in multiple browsers and drastically simplify it. It achieves this goal by providing all the tools necessary for creating a continuous integration workflow for your JavaScript project.

The ultimate result of TestSwarm is its project pages and job pages.<br clear="both">

Project page:

<img alt="TestSwarm Project page" src="./img/2013-project.png" width="793">

It shows source control commits (going vertically) by browser (going horizontally). 'Green' indicates the runs are 100% passing, 'Red' indicates a failure, and 'Grey' means the runs are scheduled and awaiting a run.

For more details on the individual jobs in a project, click the job title in the first column.

Job page:

<img alt="TestSwarm Job page" src="./img/2013-job.png" width="799">

This shows all individual runs of the job (going vertically) by browser. To view the results of a completed run, click the "run results" icon inside the colored cell.

## Architecture

### Structure

From top to bottom, the structure is as follows (a rough sketch follows this list):

* A TestSwarm install has a number of authorized accounts. These are called "users" or "projects".
* Projects have a timeline consisting of jobs. An example of a job would be "`jquery:master #5ffa3ea7eea3fd9848fb5833e3306d390672a3e2`" in project "`jquery`".
* Jobs contain two mappings:
  * Runs: every run represents a module that needs to be tested for a job. For example, "selectors" and "attributes" are typical runs in a jQuery core test job.
  * Browsers: every job has a set of browsers that it needs to validate against. TestSwarm will distribute all runs of a job to those browsers only.
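
A rough sketch of that hierarchy as a plain object. The field names are made up for illustration and do not mirror TestSwarm's actual database schema.

```js
// Illustrative only: how a project, its jobs, runs, and browsers relate.
const project = {
  name: 'jquery',
  jobs: [
    {
      title: 'jquery:master #5ffa3ea7eea3fd9848fb5833e3306d390672a3e2',
      runs: ['selectors', 'attributes'],       // one run per test module
      browsers: ['Firefox', 'Chrome', 'IE 9'], // user agents the job must pass in
      runMax: 3                                // how often a failing run may be redistributed
    }
  ]
};

console.log(project.jobs[0].runs.length * project.jobs[0].browsers.length); // 6 run-by-browser cells
```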

![TestSwarm diagram](./img/ts-swarm.png)

The architecture is as follows:

* Clients join the swarm by connecting to the "run" page on the swarm server.
* The run page is used as the framework to run all tests inside. One user can open multiple clients, and even multiple clients within a single browser. For example, you could open 5 tabs in Firefox 3, each with a view of the test runner, and you would have 5 clients connected.
* The test runner periodically pings the server, asking for the latest "run" for this browser that hasn't been run yet. If there is one, it executes it (inside an iframe) and then submits the results back to the server. If there are no new runs available, the test runner goes back to sleep and tries again later (a rough sketch of this loop follows the list).
* Every job also has a "run max" property. More about that below.
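
A rough browser-JavaScript sketch of that loop. The endpoint names (`getrun`, `saverun`), parameters, and response shape are assumptions for illustration only, not TestSwarm's actual wire format.

```js
// Hedged sketch of the client loop: ask for a pending run, execute it in an
// iframe, report the results, then sleep and ask again.
async function swarmLoop(swarmRoot, clientId) {
  for (;;) {
    const res = await fetch(`${swarmRoot}/api.php?action=getrun&client_id=${clientId}`);
    const { run } = await res.json();

    if (run) {
      const results = await executeInIframe(run.url);
      await fetch(`${swarmRoot}/api.php?action=saverun`, {
        method: 'POST',
        body: new URLSearchParams({ client_id: clientId, run_id: run.id, results })
      });
    } else {
      // No pending run: go back to sleep and try again later.
      await new Promise((resolve) => setTimeout(resolve, 30 * 1000));
    }
  }
}

// Simplified stub: load the test page in an iframe and wait for it to post its
// results back via postMessage (the real runner is more involved than this).
function executeInIframe(runUrl) {
  return new Promise((resolve) => {
    const iframe = document.createElement('iframe');
    window.addEventListener('message', function onMessage(event) {
      window.removeEventListener('message', onMessage);
      iframe.remove();
      resolve(JSON.stringify(event.data));
    });
    iframe.src = runUrl;
    document.body.appendChild(iframe);
  });
}
```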

<span id="result-correction"></span>
### Result correction

An important aspect of TestSwarm is its ability to proactively correct bad results coming in from clients. As any web developer knows, browsers are surprisingly unreliable (inconsistent results, browser bugs, network issues, etc.). Here are a few of the things that TestSwarm does to try to generate reliable results:

* If a client loses its internet connection or otherwise stops responding, its dead results will be automatically cleaned up by the swarm.
* If a client is unable to communicate with the central server, it will repeatedly re-attempt to connect (even going so far as to reload the page in an attempt to clear up any browser-borne issues).
* The client has a global timeout to watch for test suites that are uncommunicative.
* The client has the ability to watch for individual test timeouts, allowing for partial results to be submitted back to the server.
* Bad results submitted by clients (e.g. ones with errors, failures, or other timeouts) are automatically re-run in new clients in an attempt to arrive at a passing state (the number of times in which the test is re-run is determined by the submitter of the job).

Altogether, these strategies help the swarm to be quite resilient to misbehaving browsers, flaky internet connections, and even poorly-written test suites.

For example, if a job has a run max of 3 and a run fails the first time, TestSwarm will distribute it again (preferably to a different client with the same user agent; otherwise the same client will get it again later) until it either passes or hits the maximum of 3.
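
A tiny sketch of that rule, with made-up field names: keep redistributing a run until one attempt passes or the number of attempts reaches the job's run max.

```js
// Illustrative only; not TestSwarm's actual scheduling code.
function shouldRedistribute(run) {
  const passed = run.attempts.some((attempt) => attempt.passed);
  return !passed && run.attempts.length < run.runMax;
}

// Two failed attempts against a run max of 3: one more attempt gets scheduled.
console.log(shouldRedistribute({
  runMax: 3,
  attempts: [{ passed: false }, { passed: false }]
})); // true
```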

## How is TestSwarm Different From...

### Selenium

[Selenium](http://seleniumhq.org/) provides a fairly full stack of functionality. It has a test suite, a test driver, automated browser launching, and the ability to distribute test suites to many machines (using its grid functionality). There are a few important ways in which TestSwarm is different:
* TestSwarm is test suite agnostic. It isn't designed for any particular test runner and is capable of supporting any generic JavaScript test suite.
* TestSwarm is much more decentralized. Jobs can be submitted to TestSwarm without clients being connected - and will only be run once clients eventually connect.
* TestSwarm automatically corrects misbehaving clients and malformed results.
* TestSwarm provides a full continuous integration experience (it hooks into source control and offers a full browser-by-commit view, which is critical for determining the quality of commits).
* TestSwarm doesn't require any browser plugins or extensions to be installed - nor does it require that any software be installed on the client machines.

For many organizations, Selenium may already suit your needs, especially if you already have a form of continuous integration set up.

### JSTestDriver and Other Browser Launchers

There are many other browser-launching tools (such as [Watir](http://wtr.rubyforge.org/)), but all of them suffer from the same problems described above, and frequently with even less support for advanced features like continuous integration.

### Server-Side Test Running

A popular alternative to launching browsers and running test suites is running tests in headless browser instances (or in browser simulations, such as Rhino). All of these suffer from a critical problem: at a fundamental level you are no longer running tests in an actual browser, and the results can no longer be guaranteed to be identical to those of an actual browser. Unfortunately, nothing can truly replace the experience of running actual code in a real browser.