
ocaml question


Toby Kelsey

Nov 9, 2006, 5:42:46 AM
I am just learning Ocaml, and have noticed that a polymorphic argument can only
be specialised in one way in a function body. For example:

Objective Caml version 3.08.3

# let twice f x = f (f x);;
val twice : ('a -> 'a) -> 'a -> 'a = <fun>
# List.hd (List.hd [[1]]);;
- : int = 1
# twice List.hd [[1]];;
This expression has type 'a list -> 'a but is here used with type
'a list -> 'a list

Is this an unavoidable limitation of the type system, or is there a work-around?

Toby

Simon Richard Clarkstone

Nov 9, 2006, 7:44:02 AM

This is called the "monomorphism restriction", which should help you
search your help file(s). There may be a compiler flag to turn it off,
but I don't know what it might be. Of course, turning it off may
confuse the type-inferer elsewhere in the program.

--
Simon Richard Clarkstone: s.r.cl?rkst?n?@durham.ac.uk/s?m?n.cl?rkst?n?@
hotmail.com ### "I have a spelling chequer / it came with my PC /
it plainly marks for my revue / Mistake's I cannot sea" ...
by: John Brophy (at: http://www.cfwf.ca/farmj/fjjun96/)

Toby Kelsey

Nov 9, 2006, 9:01:28 AM
Simon Richard Clarkstone wrote:
> This is called the "monomorphism restriction", which should help you
> search your help file(s). There may be a compiler flag to turn it off,
> but I don't know what it might be. Of course, turning it off may
> confuse the type-inferer elsewhere in the program.

Thanks Simon, I'll read up about it.

I also noticed that pure functions applied to equal values can give different
results, in particular for +-0.0. Doesn't this violate referential transparency
and mean the compiler might produce incorrect programs?
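For example, at the toplevel (a sketch; exact printing may vary between versions):

# 0.0 = -0.0;;
- : bool = true
# 1.0 /. 0.0 = 1.0 /. (-0.0);;
- : bool = false
# string_of_float (-0.0);;
- : string = "-0."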

Are there any other sets of values which compare equal but differ? (I know
polymorphic values compare equal to specialised ones, e.g. ([] : int list) = [].)

Toby

Matthias Blume

Nov 9, 2006, 10:25:55 AM
Simon Richard Clarkstone <s.r.cla...@durham.ac.uk> writes:

> Toby Kelsey wrote:
>> I am just learning Ocaml, and have noticed that a polymorphic
>> argument can only
>> be specialised in one way in a function body. For example:
>> Objective Caml version 3.08.3
>> # let twice f x = f (f x);;
>> val twice : ('a -> 'a) -> 'a -> 'a = <fun>
>> # List.hd (List.hd [[1]]);;
>> - : int = 1
>> # twice List.hd [[1]];;
>> This expression has type 'a list -> 'a but is here used with type
>> 'a list -> 'a list
>> Is this an unavoidable limitation of the type system, or is there a
>> work-around?
>
> This is called the "monomorphism restriction", which should help you
> search your help file(s). There may be a compiler flag to turn it
> off, but I don't know what it might be. Of course, turning it off may
> confuse the type-inferer elsewhere in the program.

No, this has nothing to do with any "monomorphism restriction" (which
is a Haskell-ism). What you probably mean is the so-called "value
restriction", but even if you meant that, it would be wrong.

The problem here is that ML does not have quantification at higher kinds.
In other words, when the compiler type-checks the sub-expression
List.hd, it needs to give it a single monomorphic type. Or, to say
the same thing differently, the two instances of f in the body of
twice will have to have the same (monomorphic) type.

For the program to type-check, twice would need to have a type
signature along the lines of:

twice : (\forall F :: *->* . \forall t : * . (t -> F(t)) -> (t -> F(F(t))))

In ML, you cannot quantify over type operators such as F. If you
could, you wouldn't be able to do sound and complete type inference.
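To see the constraint concretely, here is a sketch of a toplevel session (the
exact error wording depends on the compiler version); the first two uses are
accepted because the argument function preserves its type, the third is the
original failure:

# let twice f x = f (f x);;
val twice : ('a -> 'a) -> 'a -> 'a = <fun>
# twice (fun n -> n + 1) 3;;        (* int -> int: both uses of f agree *)
- : int = 5
# twice List.tl [1; 2; 3];;         (* 'a list -> 'a list: still fine *)
- : int list = [3]
# twice List.hd [[1]];;             (* 'a list -> 'a: the type changes *)
This expression has type 'a list -> 'a but is here used with type
'a list -> 'a list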

Regards,
Matthias

Jon Harrop

Nov 9, 2006, 11:24:48 AM
Toby Kelsey wrote:
> I also noticed that pure functions applied to equal values can give
> different
> results, in particular for +-0.0.

Yes, physical equality is a pure function and those values are not
physically equal:

# 0. == -0.;;
- : bool = false

> Doesn't this violate referential
> transparency and mean the compiler might produce incorrect programs?

I don't think so.

> Are there any other sets of values which compare equal but differ? (I know
> polymorphic values compare equal to specialised ones, e.g.
> ([] : int list) = [].)


You mean what values are structurally equal but physically different? No
others that I can think of.

--
Dr Jon D Harrop, Flying Frog Consultancy
Objective CAML for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists

Ben Rudiak-Gould

Nov 9, 2006, 12:19:51 PM
Matthias Blume wrote:
>Simon Richard Clarkstone <s.r.cla...@durham.ac.uk> writes:
>>This is called the "monomorphism restriction" [...]

>
>No, this has nothing to do with any "monomorphism restriction" (which
>is a Haskell-ism).

Well, it's not *the* monomorphism restriction, but it is *a* monomorphism
restriction. It's let-bound polymorphism, or to put it another way,
everything-but-let-bound monomorphism. Not that anything you wrote is
incorrect, but I think you were too harsh.

For the OP: the rule is simply that any variable that's bound by something
other than a let expression can only be used at one type in its scope. Since
f isn't bound by a let expression, the only way that twice can typecheck is
if f's return value has the same type as its argument, which rules out List.hd.
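As a quick illustration of the let/lambda distinction (a sketch; error wording
varies by version):

# let id x = x in (id 1, id true);;           (* let-bound: two instantiations *)
- : int * bool = (1, true)
# (fun id -> (id 1, id true)) (fun x -> x);;  (* lambda-bound: one type only *)
This expression has type bool but is here used with type int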

>The problem here is that ML does not have quantification at higher kinds.

I think it's worse than that, because even Haskell, which does have
quantification at higher kinds, can't give this function the type that it
deserves. You can make it work with List.head, or with List.tail, but not
with both at the same time. You'd need intersection types for that, or some
sort of runtime type aliasing.

-- Ben

M E Leypold

Nov 9, 2006, 12:45:53 PM

Ben Rudiak-Gould <br276d...@cam.ac.uk> writes:

> Matthias Blume wrote:
> >Simon Richard Clarkstone <s.r.cla...@durham.ac.uk> writes:
> >>This is called the "monomorphism restriction" [...]
> >
> >No, this has nothing to do with any "monomorphism restriction" (which
> >is a Haskell-ism).
>
> Well, it's not *the* monomorphism restriction, but it is *a*
> monomorphism restriction. It's let-bound polymorphism, or to put it

Whatever it is called, if I understood the OP's problem right, there is a
workaround (for OCaml, which he was using), which is addressed in

http://caml.inria.fr/resources/doc/faq/core.en.html#typing under "How to
write a function with polymorphic arguments?"

As far as I understand, that will work from OCaml 3.09 upwards.
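As far as I remember, the trick in that FAQ entry is to wrap the polymorphic
function in a record with a polymorphic field, so the argument keeps its
polymorphism inside the body. Roughly (a sketch; the record type name is made
up):

# type poly_hd = { hd : 'a. 'a list -> 'a };;
type poly_hd = { hd : 'a. 'a list -> 'a; }
# let twice f x = f.hd (f.hd x);;
val twice : poly_hd -> 'a list list -> 'a = <fun>
# twice { hd = List.hd } [[1]];;
- : int = 1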

Regards -- Markus

Toby Kelsey

Nov 9, 2006, 1:08:54 PM
Jon Harrop wrote:
> Toby Kelsey wrote:
>> I also noticed that pure functions applied to equal values can give
>> different
>> results, in particular for +-0.0.
>
> Yes, physical equality is a pure function and those values are not
> physically equal:
>
> # 0. == -0.;;
> - : bool = false

That's true for all floating-point values:

# 1.0 == 1.0;;
- : bool = false

so can the compiler not eliminate common sub-expressions like (exp 1.0) ?

Perhaps it just does it for literal values, and never tries to memoize
floating-point expressions.

Toby

Jon Harrop

Nov 9, 2006, 2:39:47 PM
Toby Kelsey wrote:
> That's true for all floating-point values:
>
> # 1.0 == 1.0;;
> - : bool = false

Bad example, I meant that the internal representations were different:

# Int64.bits_of_float 0.;;
- : int64 = 0L
# Int64.bits_of_float(-0.);;
- : int64 = -9223372036854775808L

> so can the compiler not eliminate common sub-expressions like (exp 1.0) ?

The compilers don't do CSE anyway, AFAIK.

Dirk Thierbach

Nov 10, 2006, 3:27:30 AM
Toby Kelsey <toby_...@ntlworld.com> wrote:
> Jon Harrop wrote:
>> Toby Kelsey wrote:

>>> I also noticed that pure functions applied to equal values can
>>> give different results, in particular for +-0.0.

Any examples of this? It might be an artifact of the IEEE floating
point representation, where +-0.0 are not really "equal". And the
IEEE floating point infinities don't play nicely with purity either.

>> Yes, physical equality is a pure function

I am not so sure about that -- physical equality is somewhat of a hack,
because it depends on the internal representation. You can have (structurally)
equal values residing at different memory locations, and that would violate
purity (which would normally be defined in terms of structural equality).
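For instance (a sketch):

# let a = [1; 2] and b = [1; 2];;
val a : int list = [1; 2]
val b : int list = [1; 2]
# a = b, a == b;;
- : bool * bool = (true, false)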

Note that other languages (e.g. Haskell) don't even have physical
equality, because it just doesn't make sense in a purely functional
setting.

> That's true for all floating-point values:
>
> # 1.0 == 1.0;;
> - : bool = false

Which makes physical equality even more bogus :-)

- Dirk

Daniel C. Wang

Nov 10, 2006, 4:38:04 AM
to Matthias Blume
Matthias Blume wrote:
{stuff deleted}

> The problem here is that ML does not have quantification at higher kinds.
> In other words, when the compiler type-checks the sub-expression
> List.hd, it needs to give it a single monomorphic type. Or, to say
> the same thing differently, the two instances of f in the body of
> twice will have to have the same (monomorphic) type.
>
> For the program to type-check, twice would need to have a type
> signature along the lines of:
>
> twice : (\forall F :: *->* . \forall t : * . (t -> F(t)) -> (t -> F(F(t))))
>
> In ML, you cannot quantify over type operators such as F. If you
> could, you wouldn't be able to do sound and complete type inference.
>
> Regards,
> Matthias

Oh ye of little faith... (OCaml can do the same, but I don't remember the
OCaml syntax.)

functor Twice (type 'a F
               val f : 'a F -> 'a) : sig
  val app : 'a F F -> 'a
end = struct
  fun app x = f (f x)
end

structure T = Twice (type 'a F = 'a list
                     val f = List.hd)
val x = T.app [[1]]

Of course the module system is a bit of a big hammer to bang on this nail.

MoscowML has first class modules, so this might be doable more nicely there.
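For reference, a rough OCaml transcription of the functor above (a sketch: the
signature and module names are invented here, and the type constructors have
to be lower-case in OCaml):

module type HD = sig
  type 'a f
  val f : 'a f -> 'a
end

module Twice (A : HD) : sig
  val app : 'a A.f A.f -> 'a
end = struct
  let app x = A.f (A.f x)
end

module L = struct
  type 'a f = 'a list
  let f = List.hd
end

module T = Twice (L)
let x = T.app [[1]]          (* = 1 *)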

Toby Kelsey

Nov 10, 2006, 5:02:10 AM
Dirk Thierbach wrote:

>>> Toby Kelsey wrote:
>
>>>> I also noticed that pure functions applied to equal values can
>>>> give different results, in particular for +-0.0.
>
> Any examples of this? It might be an artifact of the IEEE floating
> point representation, where +-0.0 are not really "equal". And the
> IEEE floating point infinities don't play nicely with purity either.

The only differences I found were string_of_float and 1.0/.x giving +-infinity.

It would be useful if division by zero could raise an exception, and also if
integer overflow could do so. Are there compiler/interpreter options for these?

Toby

Dirk Thierbach

Nov 10, 2006, 5:29:16 AM
M E Leypold <development-2006-8...@andthatm-e-leypold.de> wrote:
> Whatever it is called, if I understood the OP's problem right, there is a
> workaround (for OCaml, which he was using), which is addressed in
>
> http://caml.inria.fr/resources/doc/faq/core.en.html#typing under "How to
> write a function with polymorphic arguments?"
>
> As far as I understand, that will work from OCaml 3.09 upwards.

That's a workaround for higher order quantification, but it still doesn't
resolve the problems Matthias Blume mentioned.

And, BTW, the observation that it's difficult to give "twice" a
fitting type that is general enough is not new; I vaguely remember
there were several papers on the subject.

- Dirk

Andreas Rossberg

Nov 10, 2006, 7:20:49 AM
Jon Harrop wrote:
>
> Yes, physical equality is a pure function

Physical equality is not at all a pure function, because it breaks
referential transparency:

let p = (1,1) in p == p
=/=
(1,1) == (1,1)

In fact, defeating referential transparency in examples like this is the
whole purpose of having physical equality.
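At the toplevel this looks like (a sketch):

# let p = (1, 1) in p == p;;
- : bool = true
# (1, 1) == (1, 1);;
- : bool = false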

- Andreas

Dirk Thierbach

Nov 10, 2006, 6:54:36 AM
Daniel C. Wang <danw...@gmail.com> wrote:
> Oh yee of little faith..

One point of the argument was that even with higher order quantifiers,
the higher kind type variables are not sufficient, because you just
cannot put them in a proper position to cover all cases.

> functor Twice( type 'a F
> val f : 'a F -> 'a ) : sig
> val app : 'a F F -> 'a
> end
> = struct
> fun app x = f(f(x))
> end
>
> structure T = Twice(type 'a F = 'a list
> val f = List.hd)

And now try the same with List.tl. :-)

- Dirk

Dirk Thierbach

Nov 10, 2006, 7:01:13 AM
Toby Kelsey <toby_...@ntlworld.com> wrote:

> The only differences I found were string_of_float and 1.0/.x giving
> +-infinity.

And that's a problem with the IEEE standard, which requires that (and
the FPU, which implements it). The standard just wasn't designed by
people who had referential transparency in mind.

> It would be useful if division by zero could raise an exception, and
> also if integer overflow could do so. Are there
> compiler/interpreter options for these?

None that I know of, but you can always just wrap the operators.
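A wrapped operator could look like this (a sketch; the operator names are made
up, and the overflow test relies on OCaml's wrap-around integer arithmetic):

# let ( /! ) x y = if y = 0.0 then raise Division_by_zero else x /. y;;
val ( /! ) : float -> float -> float = <fun>
# 1.0 /! 0.0;;
Exception: Division_by_zero.
# let ( +! ) a b =
    let s = a + b in
    if (a >= 0) = (b >= 0) && (s >= 0) <> (a >= 0) then failwith "overflow" else s;;
val ( +! ) : int -> int -> int = <fun>
# max_int +! 1;;
Exception: Failure "overflow".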

IIRC, the idea of the standard is to make it practical to go on with
your computation even if your floats by accident become so small that
you cannot distinguish them from zero (even though they are not zero).
Apparently that's OK for numerical recipes, even if it's theoretically
not very sound. But better ask someone else about those intricacies :-)

- Dirk

Matthias Blume

Nov 10, 2006, 9:53:43 AM

Now try instantiating Twice with f = fn x => [x].

I think even with the module system, you cannot really express the
"best" type of twice (or Twice, as it may be).

Matthias

Jon Harrop

Nov 10, 2006, 10:09:06 AM
Andreas Rossberg wrote:
> Physical equality is not at all a pure function, because it breaks
> referential transparency:
>
> let p = (1,1) in p == p
> =/=
> (1,1) == (1,1)
>
> In fact, defeating referential transparency in examples like this is the
> whole purpose of having physical equality.

I thought pure just meant side-effect free?

Marcin 'Qrczak' Kowalczyk

Nov 10, 2006, 2:47:36 PM
Dirk Thierbach <dthie...@usenet.arcornews.de> writes:

> And that's a problem with the IEEE standard, which requires that
> (and the FPU, which implements it). The standard just wasn't
> designed by people who had referential transparency in mind.

Another point of view is that arithmetic equality should be
distinguished from equivalence.

--
__("< Marcin Kowalczyk
\__/ qrc...@knm.org.pl
^^ http://qrnik.knm.org.pl/~qrczak/

Ben Rudiak-Gould

Nov 10, 2006, 4:24:28 PM
Dirk Thierbach wrote:
>And that's a problem with the IEEE standard, which requires that (and
>the FPU, which implements it). The standard just wasn't designed by
>people who had referential transparency in mind.

I agree with Marcin; this isn't a violation of referential transparency.
Positive and negative zero are different values. The analogy with real
numbers is misleading, as is the IEEE definition of equality.

-- Ben

Matthias Blume

Nov 10, 2006, 4:51:59 PM
Ben Rudiak-Gould <br276d...@cam.ac.uk> writes:

The real killer comes with NaNs: if x is a NaN, then not (x = x).
This is something that should never happen with any sane notion of
equality. That's one of the reasons why Standard ML has opted not to
give "equality type" status to type real.

Jon Harrop

Nov 10, 2006, 5:09:26 PM

I found an interesting bug in OCaml recently, might come up in SML compilers
too:

# let f = function 0. -> `A | -0. -> `B | _ -> `C;;
val f : float -> [> `A | `B | `C ] = <fun>
# f 0.;;
- : [> `A | `B | `C ] = `A
# f(-0.);;
- : [> `A | `B | `C ] = `A

So the pattern match is using float equality to match -0. to the first case
(as -0. = 0.) but it isn't giving a redundancy warning on the function
itself.

Matthias Blume

Nov 10, 2006, 6:16:26 PM
Jon Harrop <j...@ffconsultancy.com> writes:

This does not come up in SML since floating point constants cannot
appear within patterns.

Matthias

Daniel C. Wang

Nov 10, 2006, 10:24:13 PM
to Dirk Thierbach

>
> Now try instantiating Twice with f = fn x => [x].
>
> I think even with the module system, you cannot really express the
> "best" type of twice (or Twice, as it may be).
>
> Matthias


(* A little exercise in category theory...
Note: A canonical mapF can be generated from the definition of 'a G.

*)


functor Twice (type 'a F
               type 'a G
               val mapF : ('a -> 'b) -> 'a G -> 'b G
               val f : 'a F -> 'a G) : sig
  val app : 'a F F -> 'a G G
end = struct
  fun app x = mapF f (f x)
end

structure T1 = Twice (type 'a F = 'a list
                      type 'a G = 'a
                      fun mapF f x = f x
                      val f = List.hd)
val t1 = T1.app [[1]]

structure T2 = Twice (type 'a F = 'a list
                      type 'a G = 'a list
                      fun mapF f x = List.map f x
                      val f = List.tl)
val t2 = T2.app [[1,2],[3,4]]

structure T3 = Twice (type 'a F = 'a
                      type 'a G = 'a list
                      fun mapF f x = List.map f x
                      val f = (fn x => [x]))
val t3 = T3.app 1

functor Twice : <sig>
structure T1 : sig val app : 'a F F -> 'a G G end
val t1 = 1 : int
structure T2 : sig val app : 'a F F -> 'a G G end
val t2 = [[4]] : int ?.G ?.G
structure T3 : sig val app : 'a F F -> 'a G G end
val t3 = [[1]] : int ?.G ?.G
val it = () : unit
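And roughly the same thing again in OCaml syntax (a sketch; the module names
are invented, behaviour as in the SML version above):

module type ARG = sig
  type 'a f
  type 'a g
  val mapF : ('a -> 'b) -> 'a g -> 'b g
  val f : 'a f -> 'a g
end

module Twice (A : ARG) = struct
  let app x = A.mapF A.f (A.f x)        (* : 'a A.f A.f -> 'a A.g A.g *)
end

module A2 = struct
  type 'a f = 'a list
  type 'a g = 'a list
  let mapF = List.map
  let f = List.tl
end
module T2 = Twice (A2)

module A3 = struct
  type 'a f = 'a
  type 'a g = 'a list
  let mapF = List.map
  let f x = [x]
end
module T3 = Twice (A3)

let t2 = T2.app [[1; 2]; [3; 4]]        (* [[4]] *)
let t3 = T3.app 1                       (* [[1]] *)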

Dirk Thierbach

Nov 11, 2006, 2:31:00 AM

That's what I meant: the equality in the IEEE definition is just not the
notion of equality one can use for referential transparency. You cannot
substitute equals for equals and have a guarantee that the result is the
same. (OK, here the violation is not because of extra "outside" state,
but it's nevertheless a violation.)

- Dirk

ross...@ps.uni-sb.de

Nov 11, 2006, 12:28:30 PM
Jon Harrop wrote:
> Andreas Rossberg wrote:
> > Physical equality is not at all a pure function, because it breaks
> > referential transparency:
> >
> > let p = (1,1) in p == p
> > =/=
> > (1,1) == (1,1)
> >
> > In fact, defeating referential transparency in examples like this is the
> > whole purpose of having physical equality.
>
> I thought pure just meant side-effect free?

And more importantly, returns equal results for equal arguments. Of
course, "equal" can mean different things, but it at least has to
respect substitution of values.

So, unless you consider tuple expressions themselves impure (and all
other potentially boxed types), physical equality is not pure.

- Andreas

Joachim Durchholz

Nov 11, 2006, 1:04:16 PM
Marcin 'Qrczak' Kowalczyk wrote:

> Dirk Thierbach <dthie...@usenet.arcornews.de> writes:
>
>> And that's a problem with the IEEE standard, which requires that
>> (and the FPU, which implements it). The standard just wasn't
>> designed by people who had referential transparency in mind.
>
> Another point of view is that arithmetic equality should be
> distinguished from equivalence.

Actually equality is a rather vaguely defined concept in programs and in
ADTs, and it can change depending on the perspective you're taking.

E.g. if the representation is redundant compared to the abstract
values, your notion of equality will change depending on whether you're
"above" or "below" the abstraction barrier (languages that don't allow
redefining the equality operator usually have great trouble with this).

In the case of floating-point numbers, different algorithms call for
different kinds of equalities, and that's why IEEE distinguishes +0.0
and -0.0 (and various kinds of NaN values and infinities).
Just because two floats print the same, this doesn't mean they are the same.
Throw in round-off errors, and comparing for equality barely makes sense
for floats at all (unless you happen to know that there is no rounding in
your algorithm - but then there's little point in using floats in the first
place; the whole purpose of floats is that you can continue to compute even
if you have to throw away the least significant bits).

In other words, the One True Equality simply doesn't exist. There are
equivalence classes, that's all.

Well, OK, for integers, characters, and finite lists over them, standard
equality does make sense. Usually anyway.
The concept just doesn't transfer very well to floating-point.
Or, for that matter, to arbitrary abstract data types.

Regards,
Jo

M E Leypold

Nov 11, 2006, 3:47:59 PM

Joachim Durchholz <j...@durchholz.org> writes:

> Marcin 'Qrczak' Kowalczyk schrieb:
> > Dirk Thierbach <dthie...@usenet.arcornews.de> writes:
> >
> >> And that's a problem with the IEEE standard, which requires that
> >> (and the FPU, which implements it). The standard just wasn't
> >> designed by people who had referential transparency in mind.
> > Another point of view is that arithmetic equality should be
> > distinguished from equivalence.
>
> Actually equality is a rather vaguely defined concept in programs and
> in ADTs, and it can change depending on the perspective you're taking.

<...>


>
> In the case of floating-point numbers, different algorithms call for
> different kinds of equalities, and that's why IEEE distinguishes +0.0
> and -0.0 (and various kinds of NaN values and infinities).

> Just because two floats print the same, this doesn't mean they are the same.

I agree with that so far. I think the problem with +/- 0.0 is not really
about physical equality. The problem is that the distinction between +0.0
and -0.0 makes (addition, real numbers) into something that is not even a
group (in the group-theoretical sense).

A group has only one neutral element.

Let (-a) denote the inverse element of a.

Then assume e1 and e2 are both neutral elements, that is,

a + e1 = a = e1 + a
a + e2 = a = e2 + a

That means that

a + e2 = a + e1

Now apply (-a) to both sides of the equation; then

[left] (-a) + a + e2 = e2
[right] (-a) + a + e1 = e1

and therefore e1 = e2.

So if a + (-0.0) = a and a + (+0.0) = a then (+0.0) must be equal to (-0.0).

If one wants to hold that (+0.0) /= (-0.0), then one has to give up at least
one of the following arithmetic laws:

1. a = a
2. a + (+0.0) = a
3. a + (-0.0) = a
4. The existence of the inverse -a for every a.

Giving up any of these has serious consequences for addition as a
mathematical operation: then (addition, real numbers) is not a group
any more, and _any_ numerical law derived for real numbers is not valid,
or at least needs to be checked.

(I admit that due to the limited precision of floating-point arithmetic every
numerical algorithm has to be checked anyway, but the +/- 0.0 issue
has nothing to do with limited precision and seems like a flaw in the
logic.)

I'm appalled.


Regards -- Markus



M E Leypold

Nov 11, 2006, 4:58:36 PM

M E Leypold <development-2006-8...@ANDTHATm-e-leypold.de> writes:
>
> If one wants to hold that (+0.0) /= (-0.0), then one has to give up at least
> one of the following arithmetic laws:
>
> 1. a = a
> 2. a + (+0.0) = a
> 3. a + (-0.0) = a
> 4. The existence of the inverse -a for every a.

Oh no. I should've thought a bit longer before posting. One could also
give up associativity, and indeed it is not always the case that

( a + b ) + c = a + ( b + c )

when adding floating-point numbers with limited precision.

I wonder whether a formal mathematical model exists for this kind of
non-associative (non-)group. Is there a name for that kind of
structure?

Regards -- Markus


Aaron Denney

unread,
Nov 12, 2006, 2:54:49 PM11/12/06
to

If left and right multiplication are still one-to-one and onto maps,
then these are quasigroups. This implies that (left and right) division
is well defined, but need not be the same as multiplication by any sort
of inverse elements. Indeed, quasigroups need not have an identity. If
they do, they are known as loops.

But floating point addition is not one-to-one and onto, so the most you
can say is that it's a groupoid (or a magma, but that's a really stupid
name, so I can't recommend its use.)

http://en.wikipedia.org/wiki/Magma_%28algebra%29

--
Aaron Denney
-><-

Joachim Durchholz

Nov 12, 2006, 4:01:19 PM
M E Leypold wrote:

>
> Joachim Durchholz <j...@durchholz.org> writes:
>
>> Just because two floats print the same, this doesn't mean they are the same.
>
> I agree with that so far. I think the problem with +/- 0.0 is not really
> about physical equality. The problem is that the distinction between +0.0
> and -0.0 makes (addition, real numbers) into something that is not even a
> group (in the group-theoretical sense).

Multiple neutral elements aren't the real problem; they are just a symptom
of IEEE floating-point addition violating other group laws, such as:
* It is not associative (due to round-off errors).
* Not all elements have an inverse (the NaNs).

> (I admit due to the limited precision of float point arithemtic every
> numerical algorithm has to be checked anyway, but the +/- 0.0 issue
> has nothing to do with limited precision but seems like a flaw in the
> logic)

No flaw. It's just not a group, that's all.
That's also behind all the strange rules that you have to obey when
dealing with floats. (E.g. always allow for an epsilon when comparing
for equality.)
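For instance, a typical approximate comparison in OCaml looks like this (a
sketch; the name and the default tolerance are arbitrary):

# let approx_eq ?(eps = 1e-9) a b =
    abs_float (a -. b) <= eps *. (abs_float a +. abs_float b +. 1.0);;
val approx_eq : ?eps:float -> float -> float -> bool = <fun>
# 0.1 = 0.3 -. 0.2;;
- : bool = false
# approx_eq 0.1 (0.3 -. 0.2);;
- : bool = true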

> I'm appalled.

You're appalled that IEEE floating-point arithmetic doesn't match your
expectations. However, those are just your preconceptions. The purpose of
IEEE arithmetic isn't to be a group; its purpose is to give very precise
guarantees about round-off errors, and actually a lot of fine mathematical
theory went into it. (The pre-IEEE days saw a lot of theoretically unsound
floating-point computer hardware, and IEEE obsoleted that quite quickly.)

If you want to know more: there's an essay named "What Every Computer
Scientist Should Know About Floating-Point Arithmetic" (google for it,
you'll find it). It's written quite lucidly and explains the specific
challenges and trade-offs involved in designing floating-point
arithmetic.

Regards,
Jo
