head	1.7;
access;
symbols;
locks; strict;
comment	@# @;


1.7
date	2003.01.30.22.14.57;	author kiwi;	state Exp;
branches;
next	1.6;

1.6
date	2002.12.16.16.30.27;	author kiwi;	state Exp;
branches;
next	1.5;

1.5
date	2002.12.16.16.19.59;	author kiwi;	state Exp;
branches;
next	1.4;

1.4
date	2002.12.16.16.10.11;	author kiwi;	state Exp;
branches;
next	1.3;

1.3
date	2002.12.16.16.08.33;	author kiwi;	state Exp;
branches;
next	1.2;

1.2
date	2001.01.01.09.46.02;	author linto;	state Exp;
branches;
next	1.1;

1.1
date	2000.12.31.08.57.44;	author linto;	state Exp;
branches;
next	;


desc
@@


1.7
log
@Test (ignore this)
@
text
@$Id: README,v 1.6 2002/12/16 16:30:27 kiwi Exp $ -*- outline -*-

This file explains why the test suite exists, how it works, and what
tools you need to run it.

* Why?
The first question is probably: why automated tests, or why have we
done this at all?
** Advocacy for doing tests
Software easily becomes extremely complex, especially software such
as Roxen/Caudium that is highly configurable, i.e. the user of the
software is allowed to remove and add modules and connect them
together in ways the original program developer has perhaps not
foreseen.

As the complexity grows, the amount of time it takes to verify that
the software does not contain any errors grows with it.  As the
testing time increases, it becomes more and more difficult to
guarantee that the tests themselves are correct, and the uncertainty
about whether it is the tests, the tester, or the original software
that is in error grows, undermining the integrity of the testing.
** Automated tests
Automated testing takes the tedious parts of testing and converts
them into programs.  It uses the computer for what it is really good
at, i.e. repeating the same thing over and over.

The upside is that once you have defined what is and is not correct
behaviour for your software, and implemented a test that verifies
that behaviour, you never have to think about it again.  The test
will always be there to verify that you have not changed the correct
behaviour.

One downside is that this is impossible if you cannot define correct
behaviour, and cumbersome if you need to spend a lot of time defining
or arguing about what is correct behaviour and what is not.  This is
especially complicated when GUIs and usability issues are involved,
because a computer program cannot (easily) decide whether or not a
certain layout, font, or colouring "does the job correctly".

Another downside is that it takes a lot of time to develop the
tests.  My guess is that it takes at least twice as much time to
develop the tests as it takes to write down the same tests, run them
manually once, and document the result.

On the other hand, if you are planning several releases, or if you
find manual testing tedious and developing test cases an intellectual
challenge (as I do), the choice is simple.
** Tests for Caudium
Caudium is a web server that is configured over http/html.  That
makes it extremely well suited for automated tests, since you only
need to develop one way of communicating with the tested object.  All
you need to do is define a set of configurations, implemented by
fetching pages and filling in forms, and then a set of test pages
together with the results they shall produce.
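
As a rough illustration, filling in and submitting such a
configuration form might look like this in Tcl (a minimal sketch
only; the URL, port, and form field names below are made-up
placeholders, not the actual Caudium configuration interface):

```tcl
# Sketch: drive a web form with Tcl's http client package, as the
# test suite does.  URL and field names are hypothetical.
package require http

# Submit a configuration form by POSTing url-encoded fields.
set query [::http::formatQuery name "test-server" port "8080"]
set token [::http::geturl "http://localhost:22202/create_server" \
               -query $query]

# Check the response before moving on to the next configuration step.
if {[::http::status $token] ne "ok"} {
    puts "configuration request failed"
}
::http::cleanup $token
```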

* How?
This test suite uses the DejaGnu framework.  This means that test
cases are written in Expect.  Expect is in turn written in Tcl, and
Tcl has a client-side http protocol implementation that the tests are
built upon.

Each test case installs the server, starts it, and afterwards cleans
up by shutting the server down and removing the configuration and log
files.  When it comes to the installation, a shortcut is taken: only
the last part of the installation is performed, i.e. the "./install"
script.  The tests expect the "make install" part to already be in
place, and assume that no state is saved outside the configuration
and logs directories.
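
In outline, a test case therefore follows a shape like this (a sketch
only; the install/start/stop helper procedures and the URL are
assumptions for illustration, not actual helpers from this suite —
only pass/fail come from DejaGnu):

```tcl
# Hypothetical skeleton of a test case in the style described above.
install_server            ;# run the "./install" part of the installation
start_server              ;# start the server process

# Fetch a page and compare it against the expected result.
set token [::http::geturl "http://localhost:8080/index.html"]
if {[::http::data $token] eq $expected} {
    pass "index.html"     ;# DejaGnu reporting procedures
} else {
    fail "index.html"
}
::http::cleanup $token

stop_server               ;# shut the server down again
cleanup_files             ;# remove configuration and log files
```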

* What tools do you need to run it?
The DejaGnu tool needs to be installed on your host to run the
tests.  Since the test cases use the http protocol implementation
from Tcl 8.0, the versions of Expect and Tcl are important.

The autoconf tool is also needed to run the test suite from CVS, but
since you have already compiled and installed Caudium, I suspect it
is in place already.  If the test suite is ever made into a
distribution, autoconf will only be needed when building the
distribution, not when unpacking and running the tests.

* Vision for the future!
The vision for the future is to have automated tests that thoroughly
test all aspects of all uses of the software.

This will probably never be achieved.  It is a vision!

The short-term goal is to get tests in place for all modules, both C
modules and Pike modules, included in the Caudium distribution.  This
forces us to build or find all the tools and structure needed to
implement the automated tests.

The long-term goal is to verify that the complete set of tests
exercises every line of code, both the C lines in the C modules and
the Pike lines.  This forces us to build or find tools to verify
which lines are exercised and which are not, and then to reach
increasing levels of coverage (30%, 50%, 80%, 90%, 95%, 99%, ...).


@


1.6
log
@Verbose mode on
@
text
@d1 1
a1 1
$Id: README,v 1.5 2002/12/16 16:19:59 kiwi Exp $ -*- outline -*-
d96 1
@


1.5
log
@Yet another test.
@
text
@d1 1
a1 1
$Id: README,v 1.4 2002/12/16 16:10:11 kiwi Exp $ -*- outline -*-
a95 1

@


1.4
log
@Again.
@
text
@d1 1
a1 1
$Id: README,v 1.3 2002/12/16 16:08:33 kiwi Exp $ -*- outline -*-
d96 1
@


1.3
log
@A test... please ignore
@
text
@d1 1
a1 1
$Id: README,v 1.2 2001/01/01 09:46:02 linto Exp $ -*- outline -*-
a95 1

@


1.2
log
@Updated.
@
text
@d1 1
a1 1
$Id: README,v 1.1 2000/12/31 08:57:44 linto Exp $ -*- outline -*-
d96 2
@


1.1
log
@Initial version
@
text
@d1 1
a1 1
$Id$ -*- outline -*-
d9 1
a9 1
Software easily gets extremely complex. Especially software as
d19 1
a19 1
tests, the tester or the original software that is in error grows
d23 2
a24 2
them into scripts or programs if you will.  It uses the computer for
what it is really good at i.e. repeating the same thing over and over.
d33 5
a37 5
correct behaviour or need to spend a lot of time defining or arguing
what is correct behaviour and what is not.  Especially is this
complicated when GUIs and usability issues are involved because a
computer program cannot (easily) decide whether or not a certain
layout, font, colouring "does the job correctly".
d40 3
a42 2
My guess would be that it takes at least ten times as much time to
develop the tests as it takes to run the same test manually once.
d51 3
a53 3
need to do is to for each test is to define a set of configuration
implemented by fetching pages and filling forms and then a set of
pages with a set of result that they shall result in.
d57 1
a57 1
cases are written in expect. Expect is in turn written in tcl and tcl
d61 5
a65 5
Each test case installs the server, starts the server and then cleans
up by shutting the server down and removing the configuration and log
files.  Actually when it comes to the installation, a shortcut is
taken meaning that it is only the last part of the installation that
is done i.e. the "./install" script.  The tests expect the "make
d75 2
a76 2
but since you have already compiled and installed Caudium I suspect
they are in place already.  If ever the test suite will be made into a
d81 15
a95 3
The question might be when are we done, when have we developed enough
tests?  The answer is: when all aspects of all uses of the software
are thouroughly tested.@

