[#7043] RUBYOPT versioning? — Caleb Tennis <caleb@...>
Matz, others:
[#7050] RDoc patches for BigDecimal in Ruby CVS — mathew <meta@...>
Now that 1.8.4 is out and the initial flurry of problem reports has died
[#7055] More on VC++ 2005 — Austin Ziegler <halostatue@...>
Okay. I've got Ruby compiling. I'm attempting to get everything in
Hi,
On 05/01/06, nobuyoshi nakada <nobuyoshi.nakada@ge.com> wrote:
On 06/01/06, Austin Ziegler <halostatue@gmail.com> wrote:
Hi,
On 09/01/06, nobuyoshi nakada <nobuyoshi.nakada@ge.com> wrote:
[#7057] 64-bit Solaris READ_DATA_PENDING Revisited — Steven Lumos <steven@...>
[#7078] CRC - a proof-of-concept Ruby compiler — Anders Hkersten <chucky@...>
Hello everyone,
[#7084] mathn: ugly warnings — hadmut@... (Hadmut Danisch)
Hi,
Hadmut Danisch wrote:
Daniel Berger wrote:
Dean Wampler <deanwampler gmail.com> writes:
On Fri, 13 Jan 2006, mathew wrote:
On Fri, 13 Jan 2006, Mathieu Bouchard wrote:
ara.t.howard@noaa.gov wrote:
On Fri, 13 Jan 2006, James Britt wrote:
Dean Wampler <deanwampler gmail.com> writes:
On Sat, 14 Jan 2006, mathew wrote:
[#7100] core dump with ruby 1.9.0 (2006-01-10) and bdb-0.5.8 — Tanaka Akira <akr@...17n.org>
I found that the following test script dumps core.
>>>>> "T" == Tanaka Akira <akr@m17n.org> writes:
In article <200601110905.k0B950Op001713@moulon.inra.fr>,
[#7109] Calling flock with block? — Bertram Scharpf <lists@...>
Hi,
On Thu, 12 Jan 2006, Bertram Scharpf wrote:
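For context, a hedged sketch of the kind of block form the thread asks about: File#flock itself takes no block in stock Ruby, so a common workaround is a small wrapper (the helper name below is hypothetical) that releases the lock in an ensure clause.

    # Hypothetical helper, not from the thread: take an exclusive lock,
    # yield, and always release the lock afterwards.
    def with_flock(path)
      File.open(path, File::RDWR | File::CREAT) do |f|
        f.flock(File::LOCK_EX)          # take an exclusive lock
        begin
          yield f
        ensure
          f.flock(File::LOCK_UN)        # always release, even on error
        end
      end
    end

    with_flock("example.lock") { |f| f.puts "exclusive access here" }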
[#7129] YAML.load({[]=>""}.to_yaml) — Tanaka Akira <akr@...17n.org>
I found that the current YAML doesn't round-trip {[]=>""}.
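A minimal reproduction of the claim, assuming the YAML library shipped with 1.8-era Ruby (later versions may behave differently):

    require 'yaml'

    h = { [] => "" }
    p YAML.load(h.to_yaml) == h   # reported to print false, i.e. no round trip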
Hi.
Hi.
In article <20060115202203.D3624CA0.ocean@m2.ccsnet.ne.jp>,
[#7162] FileUtils.mv does not unlink source file when moving over filesystem boundary — Pav Lucistnik <pav@...>
Hi,
On Mon, 16 Jan 2006, Pav Lucistnik wrote:
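A sketch of the scenario the report describes; the paths are hypothetical, and the point is that they sit on different filesystems, which forces FileUtils.mv onto its copy-then-unlink path:

    require 'fileutils'

    src = "/tmp/example.dat"         # hypothetical source path
    dst = "/mnt/other/example.dat"   # hypothetical destination on another filesystem
    FileUtils.mv(src, dst)
    p File.exist?(src)               # expected false; the report says the source is left behind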
[#7178] Add XHTML 1.0 Output Support to Ruby CGI — Paul Duncan <pabs@...>
The attached patch against Ruby 1.8.4 adds XHTML 1.0 output support to
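The patch itself is not reproduced here. For comparison only, this is roughly how a tag maker is selected in stock 1.8 using the existing "html4" family of identifiers; the identifier the patch registers for XHTML 1.0 output is an assumption and is not shown in this excerpt.

    require 'cgi'

    cgi = CGI.new("html4")   # stock HTML 4.01 tag maker; the patch presumably adds an XHTML 1.0 counterpart
    puts cgi.html { cgi.head { cgi.title { "hello" } } + cgi.body { "hi" } }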
[#7186] Ruby 1.9 and FHS — "Kirill A. Shutemov" <k.shutemov@...>
Build and install system changes:
[#7195] trouble due to ruby redefining the POSIX function eaccess — noreply@...
Bugs item #3317, was opened at 2006-01-24 15:33
[#7197] SSL-enabled DRb fds on SSLError? — ctm@... (Clifford T. Matthews)
Howdy,
On Jan 24, 2006, at 12:46 PM, Clifford T. Matthews wrote:
Patch worked fine against HEAD.
[#7203] bcc32's memory manager bug — "H.Yamamoto" <ocean@...2.ccsnet.ne.jp>
Hi.
[#7211] Some troubles with an embedded ruby interpreter — Matt Mower <matt.mower@...>
Hi folks,
[#7216] String#scan loops forever if scanned string is modified inside block. — noreply@...
Bugs item #3329, was opened at 2006-01-26 10:55
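The script attached to the bug item is not included here; this is a sketch of the pattern it describes, namely mutating the receiver inside the block while String#scan is still iterating. On affected 1.8.4-era builds this was reported to loop forever, so treat it as an illustration rather than something to run:

    s = "a b c"
    s.scan(/\w+/) { |w| s << " d" }   # growing the string being scanned; reported to never terminate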
[#7226] Fwd: Re: Question about massive API changes — "Sean E. Russell" <ser@...>
Hello,
Sean E. Russell wrote:
On 1/28/06, Caleb Tennis <caleb@aei-tech.com> wrote:
On Saturday 28 January 2006 17:13, Wilson Bilkovich wrote:
Sean E. Russell wrote:
[#7249] PATCH: append option to sysread — Yohanes Santoso <ysantoso-rubycore@...>
[#7259] TCP/UDP server weird lags on 1.8.4 linux — "Bill Kelly" <billk@...>
Hi !
Re: Design contracts and refactoring (was Re: mathn: ugly warnings)
Sorry, but I think you don't understand the purpose of Test Driven Development and the role of tests in focusing the design and implementation of the code and in giving the "customer" unambiguous criteria for believing the code meets the requirements. Comments below.

On 1/12/06, mathew <meta@pobox.com> wrote:
> Dean Wampler <deanwampler gmail.com> writes:
> > Let me suggest an XP-style alternative; make thorough unit tests
> > required and make sure they "document" - and test! - the design
> > "contract".
>
> Unit tests are not an alternative. They are an additional requirement.
>
> > Some XP gurus, such as Robert Martin, have pointed out
> > that the test suite is an executable form of the requirements,
> > especially the "acceptance tests" the customer uses (or should use) to
> > confirm that a delivery satisfies the requirements.
>
> Although unit tests constitute one set of requirements, they rarely
> cover the full set. In a language like Ruby where code is often generic
> by default, it is implausible to generate enough unit tests to cover
> every eventuality.
>
> For example, consider a simple vector addition routine in a 3D library.
> The unit tests might test its behavior with Float and Integer vectors,
> since that's why it was written. However, there's no reason it shouldn't
> be used with heterogeneous vectors, Complex vectors, or any of a dozen
> other types. Indeed, there's no general way for the writer of the code
> to predict what classes the code might be asked to manipulate, nor
> should they try. (Checking using kind_of? and throwing an exception if
> you don't recognize it is not The Ruby Way, right?)

Of course you're not going to exercise every possible int, float, etc., but neither do mathematicians prove theorems that way, which would be impossible. They examine all possible "significant cases". For example, consider proof by induction: you show a claim holds for a base case, assume it holds for n, then prove that it also holds for n+1. Similarly for any programmatic tests, "unit" or other. If you do your job properly, you determine all the unique cases, taking into account the machine representations of integers, floats, etc., and write a finite set of tests that demonstrate the expected behavior in the normal and "corner" cases. For ints, you worry about behavior around 0, 2**32, and so on. You reasonably assume that if it works for i=10, then it's unlikely that you need a test for i=11, for example. It's not trivial to do all this correctly, but it is a bounded problem at least.

> However, it is of vital importance to document the intended supported
> functionality, which is a superset of the functionality the unit tests
> describe.

If I am adopting a module for use, why would I care about "intended" functionality? I'm only going to consider the module for the functionality it "actually" supports, and executable tests are my only proof of what that functionality really is. Of course, it's fine to provide a high-level description of the intention, goals, etc. of the module, but the code itself is the final "statement" of what the module really does.

> Suppose it is determined that the vector add routine is painfully slow.
> An obvious solution is to refactor it to use CBLAS, which provides
> highly optimized native code vector operations. However, to do that you
> need to know whether the feature of supporting (say) Complex vectors or
> BigDecimal vectors is intended or not. The unit tests won't tell you this.

Sure they tell you this. If the tests don't cover these options, I have to assume they aren't supported. Again, I don't care about intentions; I care about what works through explicit demonstration.

> You also need to know whether vectors with elements exceeding the
> capacity of machine types are supported. It's quite likely your unit
> tests don't cover the entire range of integers and floats, because you
> don't have an infinite amount of time to run unit tests in. So, are the
> omissions intentional, indicating a limit to the scope of the API? Or
> are they just omissions? There's no way to tell.

I covered this above. For a math package on a real computer, there are no real infinities, so the testing challenge is bounded, if still difficult. Certainly the IEEE standards for floating point arithmetic are not designed to be untestable! You can and should write tests that cover behavior related to machine limitations.

> > One limitation of documentation is that it has no enforcement power,
> > so you have to write tests anyway to test conformance.
>
> Unit tests have no enforcement power either, because you can just change
> the test. Indeed, I've already had to do this once when it turned out
> that the unit test was wrong. (In net/ftp.)

This makes no sense. If someone writes bad tests, that's not the fault of the testing "paradigm". Tests, whether they are explicitly called "tests", shipping applications (all too often...) or something else, are the only "enforcement power" we have. Documents don't prove anything.

> > The XP view is
> > that you should eliminate the redundancy.
>
> Except it's not redundancy.
>
> Unit tests define a set of functionality that is required. Documentation
> tells you the functionality that is supported, which is generally a
> superset of the functionality required by the unit tests.

As I said before, I would never trust something a document says that isn't backed up by proof using a test of some kind. A module that claims to support functionality that is a superset of what's covered by the tests is not a complete deliverable, IMHO.

> People who write code using a library are interested in the *supported*
> functionality, not the bare minimum *required* functionality.

If it's not backed by tests, it's not "supported".

> People who want to refactor code need to know the *supported*
> functionality, not the bare minimum *required* functionality.
>
> Finally, a reductio ad absurdum to illustrate the problem in concrete
> form. Give me a function and a set of unit tests, and I can implement a
> hopelessly broken refactored version that happens to pass all those tests:
>
>   if (arguments == unit-test-set-1)
>     return canned-value-1
>   if (arguments == unit-test-set-2)
>     return canned-value-2
>   else
>     return random-value
>
> Without documentation saying what the function was supposed to do,
> there's no reason my version shouldn't be considered correct according
> to XP dogma. After all, it passes all the unit tests, and it's likely to
> be very fast.

If your "unit-test-set-N" set covered all the required functionality of the module, then this implementation is in fact "sufficient"! If it's fast, so much the better. I recall from my old Fortran days that some trig functions look just like this example: they are essentially table lookups, with maybe some interpolation to get values between table elements. (Don't quote me on this ;) )

You've actually brought up the main advantage of TDD; it helps force the simplest, minimum necessary design that meets the complete requirements, thereby eliminating unnecessary complexity.

> And that's one of the major faults with XP dogma.

In fact, I think this is one of its strengths: TDD pushes you to implement "what's the simplest thing that can possibly work". Done properly, the test suite is a programmatic statement of the complete requirements supported by the module, and it proves the support is really there.

dean

> mathew

--
Dean Wampler
http://www.aspectprogramming.com
http://www.newaspects.com
http://www.contract4j.org
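To make the running example concrete, here is a minimal sketch of the vector-add case both sides argue over (the method and the tests are hypothetical, not taken from either poster). The method is generic over anything responding to +, which is mathew's point; the test suite only pins down Integer and Float behavior, which is all that Dean would call supported.

    require 'test/unit'

    # Generic element-wise addition: works for Integer, Float, Complex,
    # BigDecimal, even mixed vectors, because it only relies on +.
    def vector_add(a, b)
      a.zip(b).map { |x, y| x + y }
    end

    class TestVectorAdd < Test::Unit::TestCase
      def test_integer_vectors
        assert_equal([5, 7, 9], vector_add([1, 2, 3], [4, 5, 6]))
      end

      def test_float_vectors
        assert_in_delta(1.5, vector_add([1.0], [0.5]).first, 1e-9)
      end
    end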