
Re: Design contracts and refactoring (was Re: mathn: ugly warnings)

From: Dean Wampler <deanwampler@...>
Date: 2006-01-12 17:21:43 UTC
List: ruby-core #7119
Sorry, but I think you don't understand the purpose of Test-Driven
Development: tests focus the design and implementation of the code, and
they give the "customer" unambiguous criteria for believing the code
meets the requirements.

Comments below.

On 1/12/06, mathew <meta@pobox.com> wrote:
> Dean Wampler <deanwampler@gmail.com> writes:
> > Let me suggest an XP-style alternative; make thorough unit tests
> > required and make sure they "document" - and test! - the design
> > "contract".
>
> Unit tests are not an alternative. They are an additional requirement.
>
> >  Some XP gurus, such as Robert Martin, have pointed out
> > that the test suite is an executable form of the requirements,
> > especially the "acceptance tests" the customer uses (or should use) to
> > confirm that a delivery satisfies the requirements.
> >
>
> Although unit tests constitute one set of requirements, they rarely
> cover the full set. In a language like Ruby where code is often generic
> by default, it is implausible to generate enough unit tests to cover
> every eventuality.
>
> For example, consider a simple vector addition routine in a 3D library.
> The unit tests might test its behavior with Float and Integer vectors,
> since that's why it was written. However, there's no reason it shouldn't
> be used with heterogeneous vectors, Complex vectors, or any of a dozen
> other types. Indeed, there's no general way for the writer of the code
> to predict what classes the code might be asked to manipulate, nor
> should they try. (Checking using kind_of? and throwing an exception if
> you don't recognize it is not The Ruby Way, right?)
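>
> For instance, a minimal sketch of such a routine (hypothetical names,
> assuming only that the elements respond to +):
>
>   # Element-wise vector addition; no class checks, just duck typing.
>   def vector_add(a, b)
>     raise ArgumentError, "length mismatch" unless a.length == b.length
>     a.zip(b).map { |x, y| x + y }
>   end
>
>   vector_add([1, 2, 3], [0.5, 0.5, 0.5])        # => [1.5, 2.5, 3.5]
>   vector_add([Complex(1, 1)], [Complex(0, 2)])  # => [(1+3i)]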

Of course you're not going to exercise every possible int, float,
etc., but neither do mathematicians prove theorems that way, which
would be impossible. They examine all possible "significant cases".
For example, consider proof by induction: to show that 1 + 2 + ... + n
equals n*(n+1)/2 for every positive integer n, you check the base case
n = 1, assume the formula holds for n, and then show it also holds for
n + 1 (adding n + 1 to both sides gives (n+1)*(n+2)/2). Two finite
steps cover infinitely many cases.

The same goes for programmatic tests, "unit" or otherwise. If you do
your job properly, you determine all the unique cases, taking into
account the machine representations of integers, floats, and so on, and
write a finite set of tests that demonstrate the expected behavior in
the normal and "corner" cases. For ints, you worry about behavior
around 0, 2**32, and so on. You reasonably assume that if it works for
i=10, it's unlikely you need a test for i=11. None of this is trivial
to do correctly, but it is at least a bounded problem.
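
For instance, a sketch of what such corner-case tests might look like
(test/unit; vector_add here is the hypothetical routine from the quoted
example):

  require 'test/unit'

  class TestVectorAddCorners < Test::Unit::TestCase
    def test_additive_identity_at_zero
      assert_equal([1, 2], vector_add([1, 2], [0, 0]))
    end

    # Ruby promotes integers past the machine word transparently; a
    # test around 2**32 demonstrates the boundary is actually handled.
    def test_word_size_boundary
      assert_equal([2**32], vector_add([2**32 - 1], [1]))
    end
  end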

> However, it is of vital importance to document the intended supported
> functionality, which is a superset of the functionality the unit tests
> describe.

If I am adopting a module for use, why would I care about its
"intended" functionality? I'm only going to consider the module for the
functionality it "actually" supports, and executable tests are my only
proof of what that functionality really is.

Of course, it's fine to provide a high-level description of the
module's intention, goals, and so on, but the code itself is the final
"statement" of what the module really does.

> Suppose it is determined that the vector add routine is painfully slow.
> An obvious solution is to refactor it to use CBLAS, which provides
> highly optimized native code vector operations. However, to do that you
> need to know whether the feature of supporting (say) Complex vectors or
> BigDecimal vectors is intended or not. The unit tests won't tell you this.

Sure they do: if the tests don't cover these options, I have to assume
they aren't supported. Again, I don't care about intentions; I care
about what is explicitly demonstrated to work.

> You also need to know whether vectors with elements exceeding the
> capacity of machine types are supported. It's quite likely your unit
> tests don't cover the entire range of integers and floats, because you
> don't have an infinite amount of time to run unit tests in. So, are the
> omissions intentional, indicating a limit to the scope of the API? Or
> are they just omissions? There's no way to tell.

I covered this above. For a math package on a real computer there are
no true infinities, so the testing challenge is bounded, if still
difficult. Certainly the IEEE floating-point standards are not designed
to be untestable! You can and should write tests that cover behavior at
the machine's limits.
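
For example, a sketch using standard Ruby Float constants:

  require 'test/unit'

  class TestFloatLimits < Test::Unit::TestCase
    # IEEE 754 doubles overflow to a representable Infinity.
    def test_overflow_yields_infinity
      assert_equal(1.0 / 0, Float::MAX * 2)
    end

    # Float::EPSILON is the gap between 1.0 and the next float up, so
    # adding half of it is absorbed by rounding.
    def test_epsilon_boundary
      assert_equal(1.0, 1.0 + Float::EPSILON / 2)
      assert_not_equal(1.0, 1.0 + Float::EPSILON)
    end
  end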

>
> > One limitation of documentation is that it has no enforcement power,
> > so you have to write tests anyway to test conformance.
>
> Unit tests have no enforcement power either, because you can just change
> the test. Indeed, I've already had to do this once when it turned out
> that the unit test was wrong. (In net/ftp.)

This makes no sense. If someone writes bad tests, that's not the fault
of the testing "paradigm". Tests, whether they are explicitly called
"tests", shipping applications (all too often...), or something else,
are the only "enforcement power" we have. Documents don't prove
anything.

> >  The XP view is
> > that you should eliminate the redundancy.
>
> Except it's not redundancy.
>
> Unit tests define a set of functionality that is required. Documentation
> tells you the functionality that is supported, which is generally a
> superset of the functionality required by the unit tests.

As I said before, I would never trust a document's claim that isn't
backed up by proof in the form of a test of some kind. A module that
claims support for functionality beyond what its tests cover is not a
complete deliverable, IMHO.

> People who write code using a library are interested in the *supported*
> functionality, not the bare minimum *required* functionality.

If it's not backed by tests, it's not "supported".

> People who want to refactor code need to know the *supported*
> functionality, not the bare minimum *required* functionality.
>
> Finally, a reductio ad absurdum to illustrate the problem in concrete
> form. Give me a function and a set of unit tests, and I can implement a
> hopelessly broken refactored version that happens to pass all those tests:
>
> def refactored_function(*arguments)
>   return canned_value_1 if arguments == unit_test_set_1
>   return canned_value_2 if arguments == unit_test_set_2
>   random_value
> end
>
> Without documentation saying what the function was supposed to do,
> there's no reason my version shouldn't be considered correct according
> to XP dogma. After all, it passes all the unit tests, and it's likely to
> be very fast.

If your "unit-test-set-N" set covered all the required functionality
of the module, then this implementation is in fact "sufficient"! If
it's fast, so much the better. I recall from my old Fortran days that
some trig. functions look just like this example. They are essentially
table lookups with maybe some interpolation to get values between
table elements. (Don't quote me on this ;) )  You've actually brought
up the main advantage of TDD; it helps force the design the simplest,
minimum necessary design to meet the complete requirements, thereby
eliminating unnecessary complexity.
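
For illustration only (a toy sketch, not any real library's code), such
a lookup-plus-interpolation routine might look like:

  # Toy sine: a one-degree lookup table plus linear interpolation.
  STEP  = Math::PI / 180
  TABLE = (0..360).map { |i| Math.sin(i * STEP) }

  def table_sin(x)
    t = (x % (2 * Math::PI)) / STEP   # position in table units
    i = t.floor
    TABLE[i] + (t - i) * (TABLE[i + 1] - TABLE[i])
  end

  table_sin(Math::PI / 6)  # => ~0.5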

>
> And that's one of the major faults with XP dogma.

In fact, I think this is one of its strengths: TDD pushes you to
implement "the simplest thing that could possibly work". Done properly,
the test suite is a programmatic statement of the complete requirements
supported by the module, and it proves the support is really there.

dean

> mathew
>
>


--
Dean Wampler
http://www.aspectprogramming.com
http://www.newaspects.com
http://www.contract4j.org

