Interval arithmetic


Interval arithmetic

Postby mr_e_man » Tue Mar 05, 2024 4:30 am

Interval arithmetic is often concerned with large intervals (or, in more than one dimension, rectangular boxes), which can cover an entire region of the number line (or region of space).

Ball arithmetic is a specialized form of interval arithmetic, more concerned with tiny intervals (or balls), meant to contain individual numbers (or points in space).



Suppose for instance that we're calculating the square root of 2, and we find that it's contained in the interval

[1.41421356237309504880168836, 1.41421356237309504880168874].

Since most of the digits are the same, this data can be compressed to about half the size, by specifying the interval's midpoint and radius instead of its two endpoints:

1.41421356237309504880168855 ± 0.00000000000000000000000019
1.41421356237309504880168855 ± 19*10^-26
1.41421356237309504880168855~19

The tilde notation means that the next few digits and the previous few digits have the same place-values (powers of ten). Some people use parentheses instead:

1.41421356237309504880168855(19)

This conflicts with other, more common uses of parentheses, and it takes one more character than the tilde.

We can take digits from the midpoint and put them into the radius, e.g. replacing the last part '55~19' with '50~24', thus expanding the interval. The true value of √2 is still enclosed. We can also round up the radius, again expanding the interval. And of course we can add or remove any number of trailing zeros, if it's the same number for both the midpoint and the radius.

1.41421356237309504880168855~19
1.41421356237309504880168850~24
1.41421356237309504880168850~30
1.4142135623730950488016885000~3000
1.4142135623730950488016885~3
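This endpoint-to-midpoint conversion is easy to mechanize. A sketch in Python (`to_tilde` is an illustrative helper, not part of any module mentioned here; it uses exact integer arithmetic throughout, so nothing is lost, and it assumes non-negative inputs with a decimal point):

```python
from fractions import Fraction

def to_tilde(lo: str, hi: str) -> str:
    """Convert decimal endpoints [lo, hi] into midpoint~radius notation.

    Everything is exact integer arithmetic: scale both endpoints to a
    common number of decimal places, then split into midpoint and radius.
    """
    places = max(len(lo.partition(".")[2]), len(hi.partition(".")[2]))
    scale = 10 ** places
    a = round(Fraction(lo) * scale)   # Fraction() parses decimal strings exactly
    b = round(Fraction(hi) * scale)
    mid2, rad2 = a + b, b - a         # twice the midpoint, twice the radius
    if mid2 % 2 or rad2 % 2:          # halving would need one more decimal place
        mid2, rad2, places = mid2 * 5, rad2 * 5, places + 1
    else:
        mid2, rad2 = mid2 // 2, rad2 // 2
    digits = str(mid2).zfill(places + 1)
    return f"{digits[:-places]}.{digits[-places:]}~{rad2}"

print(to_tilde("1.41421356237309504880168836",
               "1.41421356237309504880168874"))
# 1.41421356237309504880168855~19
```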

We cannot round down the radius, or the true value might escape from the interval:

1.41421356237309504880168850~24
1.41421356237309504880168850~20
[1.41421356237309504880168830, 1.41421356237309504880168870]

Indeed, √2 = 1.41421356237309504880168872…, which lies above the upper endpoint of this shrunken interval.



Usually, when a number is written without error bounds, such as π = 3.1416, it's implied that the error (i.e. the radius) is no more than 0.5 ULPs (Units in the Last Place):

3.14160~5
[3.14155, 3.14165]

But this lazy approach, not writing the error explicitly, only allows scaling the radius by a power of ten. If we're rounding the radius up and never down, it rapidly leads to loss of precision. For example, suppose we're adding √2+√3+√5, given just a few digits of each:

√2 = 1.4142
√3 = 1.7321
√5 = 2.2361
sum = 5.3824

This result is misleading, since the error is more than 0.00005. You might think we could round it to 5.382, but even that is questionable. Here's what's actually happening:

√2 = 1.41420~5
√3 = 1.73210~5
√5 = 2.23610~5
sum = 5.38240~15

If we throw away the digit 4, then it must be absorbed into the radius:

5.38200~55

Now the radius is greater than 0.0005, so we shouldn't write 5.382. We could throw away two digits:

5.38000~255
5.38000~500
5.380~5
"5.38"

Or we could write the radius explicitly; even a single digit there improves the result by a factor of 25, in this case:

5.38240~15
5.38240~20
5.3824~2
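The radius bookkeeping in the sum above is mechanical. A toy sketch in Python (`TinyBall` is illustrative only; a real implementation would also round the midpoint, absorbing the dropped digits into the radius, and round the radius up to a fixed number of digits):

```python
from dataclasses import dataclass

@dataclass
class TinyBall:
    """A ball m*10^e +/- r*10^e, with a shared decimal exponent."""
    m: int  # midpoint mantissa
    r: int  # radius mantissa
    e: int  # decimal exponent

    def __add__(self, other):
        # Align both balls to the smaller exponent; midpoints and radii add.
        e = min(self.e, other.e)
        return TinyBall(self.m * 10 ** (self.e - e) + other.m * 10 ** (other.e - e),
                        self.r * 10 ** (self.e - e) + other.r * 10 ** (other.e - e),
                        e)

    def __str__(self):
        return f"{self.m}e{self.e}~{self.r}"

# sqrt2 + sqrt3 + sqrt5, each with an implied radius of 5 in the next place:
total = TinyBall(141420, 5, -5) + TinyBall(173210, 5, -5) + TinyBall(223610, 5, -5)
print(total)  # 538240e-5~15, i.e. 5.38240~15
```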
ΓΔΘΛΞΠΣΦΨΩ αβγδεζηθϑικλμνξοπρϱσςτυϕφχψωϖ °±∓½⅓⅔¼¾×÷†‡• ⁰¹²³⁴⁵⁶⁷⁸⁹⁺⁻⁼⁽⁾₀₁₂₃₄₅₆₇₈₉₊₋₌₍₎
ℕℤℚℝℂ∂¬∀∃∅∆∇∈∉∋∌∏∑ ∗∘∙√∛∜∝∞∧∨∩∪∫≅≈≟≠≡≤≥⊂⊃⊆⊇ ⊕⊖⊗⊘⊙⌈⌉⌊⌋⌜⌝⌞⌟〈〉⟨⟩
mr_e_man
Tetronian
 
Posts: 490
Joined: Tue Sep 18, 2018 4:10 am

Re: Interval arithmetic

Postby mr_e_man » Tue Mar 05, 2024 4:41 am

I've implemented ball arithmetic in Python.

I decided to use decimal rather than binary, to make input and output simple and lossless. If binary is converted to decimal then information is lost; if it's not converted then you can't read it. Binary is computationally more efficient, and I wish people would use it more (or a compressed variant like hexadecimal), but they don't. Everyone (including me) is much more familiar with decimal. I want this module to be easy for them to use.

And that ease includes not having to give me credit, or worry about implicit copyrights, or put any particular license on their own programs. So I'm putting the Zero-Clause BSD License on this.

To evaluate most functions, I used Taylor series. I wanted to use Newton's method, at least for 1/x and 1/√x (and thereby √x = (1/√x)·x), because those functions are more common, but I'd still have to figure out the details. It's easier for me to work with series.
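For reference, the division-free Newton iterations for those two functions look like this (plain floats here, just to show the recurrences; a ball version would also have to bound the truncation error):

```python
def recip_newton(a: float, x0: float, steps: int = 6) -> float:
    """Newton's method for f(x) = 1/x - a: x <- x*(2 - a*x).

    Division-free and quadratically convergent; x0 must be a rough
    initial guess with a*x0 in (0, 2).
    """
    x = x0
    for _ in range(steps):
        x = x * (2 - a * x)
    return x

def rsqrt_newton(a: float, x0: float, steps: int = 6) -> float:
    """Newton's method for f(x) = 1/x**2 - a: x <- x*(3 - a*x*x)/2."""
    x = x0
    for _ in range(steps):
        x = x * (3 - a * x * x) / 2
    return x

print(recip_newton(7.0, 0.1))        # ~0.142857..., i.e. 1/7
print(rsqrt_newton(2.0, 0.7) * 2.0)  # ~1.41421..., i.e. sqrt(2) = (1/sqrt 2)*2
```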

Anyway, here's my Python code.
Attachments
decball.txt
(47.48 KiB)

Re: Interval arithmetic

Postby mr_e_man » Tue Mar 05, 2024 6:31 am

At first I didn't intend to support high precision (hundreds of digits), but Python encouraged it. It's still not efficient; e.g. it (decball.py) often recalculates π or log(10) with varying precision, even if it was calculated before with higher precision.
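One cheap fix for that recomputation is a precision-keyed cache: compute a constant once at the highest precision requested so far, and truncate for lower-precision requests. A sketch (the names and the Machin-style `compute_pi` are my own illustration, not decball's internal routines):

```python
def _arctan_inv(x: int, one: int) -> int:
    """one * arctan(1/x) by the Taylor series, in integer arithmetic."""
    power = one // x              # one * (1/x)**n for the current odd n
    total, n, sign = 0, 1, 1
    while power // n:
        total += sign * (power // n)
        power //= x * x
        n, sign = n + 2, -sign
    return total

def compute_pi(digits: int) -> int:
    """pi * 10**digits, truncated, via Machin: pi/4 = 4*atan(1/5) - atan(1/239)."""
    one = 10 ** (digits + 10)     # ten guard digits absorb the truncation error
    return 4 * (4 * _arctan_inv(5, one) - _arctan_inv(239, one)) // 10 ** 10

_pi_cache = (0, 0)                # (digits available, pi * 10**digits)

def cached_pi(digits: int) -> int:
    """pi * 10**digits, reusing any earlier higher-precision computation."""
    global _pi_cache
    if _pi_cache[0] < digits:
        _pi_cache = (digits, compute_pi(digits))
    have, value = _pi_cache
    return value // 10 ** (have - digits)

print(cached_pi(25))  # 31415926535897932384626433
print(cached_pi(10))  # 31415926535, served from the cache
```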

You can do fun things like computing the tangent of a googol radians, to 60 digits:

Code:
>>> tan(10**100 + Ball(0, 1, -60))
0.40123196199081435418575434365329495832387026112924406831944154~161

(The '>>>' lines are input, and the other lines are output.)

You can't determine whether π^(π^(π^π)) is an integer, but you can determine whether 2.5^(2.5^(2.5^2.5)) is an integer:

Code:
>>> x = B("2.5000000~1")
>>> exp(log(x)*x)  # That is x**x
9.88211769~216
>>> exp(log(x)*exp(log(x)*x))  # That is x**(x**x)
8560.3200~297
>>> exp(log(x)*exp(log(x)*exp(log(x)*x)))  # That is x**(x**(x**x))
3.1176~885e3406

This is larger than 10^3400, which suggests redoing the computation with 3500 digits. It'll take a while, and we don't want to print all those digits, so let's store the results in variables:

Code:
>>> x = B("2.5" + "0"*3500 + "~1")
>>> y = log(x)
>>> z = exp(y*x)  # x**x
>>> z = exp(y*z)  # x**(x**x)
>>> z = exp(y*z)  # x**(x**(x**x))
>>> z - floor(z)
0.2650151911484744424~657

This shows that it's definitely not an integer.

Re: Interval arithmetic

Postby mr_e_man » Tue Mar 19, 2024 9:40 pm

Three digits in the radius is a bit too much for readability.
(And as I said before, I don't want to round the displayed value and lose information; it should show the actual value the computer uses.)

But if I reduce it to two digits everywhere, including internal calculations, the results are significantly worse.
Consider these calculations of √(1/9):
Code:
>>> for i in range(1, 10):
...     Ball.radius_precision = i
...     print(i, ":", sqrt(B("0.111111111111~1")))
...
1 : 0.4~3
2 : 0.33333333336~13
3 : 0.33333333333325~349
4 : 0.333333333333175~1716
5 : 0.3333333333331677~15229
6 : 0.33333333333316676~150243
7 : 0.333333333333166669~1500260
8 : 0.3333333333331666673~15000277
9 : 0.33333333333316666669~150000289
Notice that dropping from 3 to 2 digits increases the radius by a factor of 13000/349, whereas dropping from 4 to 3 only increases it by 3490/1716, and beyond that it quickly stabilizes to a factor of 1.

This is related to the number of terms used in the Taylor series. Let's modify the square root function to show this number:
Code:
def _taylor_sqrt1m(x):
    ...
    # End of 'while' loop
    print("(", m//2, " terms:)", sep="", end=" ")
    return result

Code:
>>> for i in range(1, 10):
...     Ball.radius_precision = i
...     print(i, ":", sqrt(B("0.111111111111~1")))
...
(8 terms:) 1 : 0.4~3
(152 terms:) 2 : 0.33333333336~13
(201 terms:) 3 : 0.33333333333325~349
(217 terms:) 4 : 0.333333333333175~1716
(236 terms:) 5 : 0.3333333333331677~15229
(256 terms:) 6 : 0.33333333333316676~150243
(274 terms:) 7 : 0.333333333333166669~1500260
(292 terms:) 8 : 0.3333333333331666673~15000277
(312 terms:) 9 : 0.33333333333316666669~150000289
The number of terms is around 200.
If the radius only holds 2 digits, then adding all these terms makes it overflow and get rounded up, and the minimum allowed increase in the radius becomes ten times larger, so any further added terms will make the problem even worse.
If the radius holds 3 digits, this is less likely to happen.


So, I think different variables should have different degrees of precision. Ordinary variables could have 2 digits in the radius. The main variable used in a very long calculation could have 4 digits, or even 10, in case you want to add a billion terms. And at the end of the calculation it could be reduced to 2.

Re: Interval arithmetic

Postby quickfur » Tue Mar 26, 2024 12:15 am

Working with decimal digits or interval arithmetic isn't always the answer. Sometimes, depending on what you're doing, you can get away with exact integer arithmetic instead.

For example, by working in a suitable extension field, you can manipulate numbers of the form (p+q√2) using only integer arithmetic and get exact answers without any roundoff errors. This can be done for all field operations (addition, subtraction, multiplication, division). In fact you can do this for any algebraic number of degree n. There's a theorem in algebra that you can perform exact arithmetic with algebraic numbers of degree n using vectors of 2n integer coordinates.

Only, the complexity of implementing such arithmetic is exponential in n, so for practical purposes this is really only useful for n≤3 or somewhere thereabouts. And unfortunately, numbers like pi or e are transcendental, and so do not admit such a representation.

For things like computing coordinates for n-cubes, n-cube truncates, and so on, it is sufficient to perform computations in Q[√2], i.e., manipulate numbers of the form (p+q√2). Similarly, computing coordinates for the pentagonal polytopes can be done in Q[√5] (which is isomorphic to Q[phi], where phi is the golden ratio), whose elements are of the form (p+q√5) (equivalently, (p+q·phi)). While you're limited to numbers of this form (as opposed to arbitrary real numbers), you gain the huge advantage that all arithmetic is exact, with no roundoff error. Note that p and q can be rational numbers too, since the rationals themselves can be manipulated exactly as pairs of integers (numerator and denominator).
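As a minimal sketch of this idea in Python (`QSqrt2` is an illustrative name; `Fraction` supplies the exact rational coefficients):

```python
from fractions import Fraction
from dataclasses import dataclass

@dataclass(frozen=True)
class QSqrt2:
    """Element p + q*sqrt(2) of Q[sqrt(2)], with exact rational p, q."""
    p: Fraction
    q: Fraction

    def __add__(self, o):
        return QSqrt2(self.p + o.p, self.q + o.q)

    def __sub__(self, o):
        return QSqrt2(self.p - o.p, self.q - o.q)

    def __mul__(self, o):
        # (p + q*r2)(p' + q'*r2) = (p*p' + 2*q*q') + (p*q' + q*p')*r2
        return QSqrt2(self.p * o.p + 2 * self.q * o.q,
                      self.p * o.q + self.q * o.p)

    def __truediv__(self, o):
        # Multiply by the conjugate: 1/(p + q*r2) = (p - q*r2)/(p^2 - 2*q^2)
        n = o.p * o.p - 2 * o.q * o.q
        return self * QSqrt2(o.p / n, -o.q / n)

r2 = QSqrt2(Fraction(0), Fraction(1))
one = QSqrt2(Fraction(1), Fraction(0))
print(r2 * r2)           # exactly 2
print(one / (one + r2))  # exactly sqrt(2) - 1, with no roundoff
```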
quickfur
Pentonian
 
Posts: 2955
Joined: Thu Sep 02, 2004 11:20 pm
Location: The Great White North

Re: Interval arithmetic

Postby mr_e_man » Fri Mar 29, 2024 2:22 am

Right.

Except...
quickfur wrote:There's a theorem in algebra that you can perform exact arithmetic with algebraic numbers of degree n using vectors of 2n integer coordinates.
:\
I think only n rational coordinates are needed, or up to n² if the two numbers are in different fields.
(As you noted, rational vs. integral is no issue; n rational numbers can be encoded as 2n integers, or just n+1 integers using a common denominator.)


quickfur wrote:This can be done for all field operations (addition, subtraction, multiplication, division).

Exact algebra can even handle some trigonometry, even though sine and cosine are transcendental functions. For most angles we're interested in, the cosine is an algebraic number, and it follows that the sine is as well, since (cosθ)²+(sinθ)²=1. Two angles can be added using

cos(θ+ϕ) = cosθ cosϕ - sinθ sinϕ,
sin(θ+ϕ) = sinθ cosϕ + cosθ sinϕ,

or alternatively

exp(iθ) = cosθ + i sinθ,
exp(i(θ+ϕ)) = exp(iθ) exp(iϕ).
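A tiny Python illustration of the first pair of formulas, on exact (cos, sin) pairs (the 3-4-5 triangle supplies an angle whose cosine and sine are rational):

```python
from fractions import Fraction

def add_angles(a, b):
    """Exact angle addition on (cos, sin) pairs:
    cos(t+u) = cos t cos u - sin t sin u,
    sin(t+u) = sin t cos u + cos t sin u."""
    (ca, sa), (cb, sb) = a, b
    return (ca * cb - sa * sb, sa * cb + ca * sb)

t = (Fraction(4, 5), Fraction(3, 5))    # an acute angle of a 3-4-5 triangle
double = add_angles(t, t)
print(double)                           # (Fraction(7, 25), Fraction(24, 25))
print(double[0] ** 2 + double[1] ** 2)  # 1 -- still exactly on the unit circle
```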


But how do you compare two algebraic numbers? E.g. how can you tell whether one is positive or negative?
It is doable if you only have square roots (possibly nested). But I don't see anything you can do with higher degrees.

Re: Interval arithmetic

Postby quickfur » Mon Apr 01, 2024 10:00 pm

mr_e_man wrote:Right.

Except...
quickfur wrote:There's a theorem in algebra that you can perform exact arithmetic with algebraic numbers of degree n using vectors of 2n integer coordinates.
:\
I think only n rational coordinates are needed, or up to n² if the two numbers are in different fields.
(As you noted, rational vs. integral is no issue; n rational numbers can be encoded as 2n integers, or just n+1 integers using a common denominator.)

Hmm, you're right: you don't need 2n coordinates; n is sufficient if the numbers are in the same field, n² if they are in different fields. Mea culpa.

[...]
But how do you compare two algebraic numbers? E.g. how can you tell whether one is positive or negative?
It is doable if you only have square roots (possibly nested). But I don't see anything you can do with higher degrees.

This is a very good question, and one I've grappled with for a while now. I have managed to figure out how to do it for the golden ratio, and probably for arbitrary roots of quadratics by extension. Basically it revolves around finding polynomial factors that annihilate the linear term in your polynomial, so that you end up with a square on one side which must be non-negative, and a linear expression on the other side. Use the inequality of rationals and the presumed irrationality of the algebraic number to exit early from your decision tree in the more obvious cases.

For cubics it's more tricky. I haven't completely worked it out yet, but I did get close enough that I think you could do it in a similar way: set up a system of linear equations on unknown coefficients of a cubic that will annihilate certain terms in your original cubic on the algebraic root, solve the system for the coefficients, then use the result to simplify your original cubic until it reduces to something you can trivially tell the sign of.

I'm fairly confident this could be done for quartics as well, the intermediate expressions just get more hairy and you may need to deal with more special cases in your decision tree. But I'm not 100% whether quintics can be resolved this way, given the Abel-Ruffini theorem. Haven't really got that far yet. :D

//

One issue with this scheme of exact arithmetic of algebraic numbers is that the coefficients involved can grow very quickly, and if implemented on the computer, you will have to be careful about coefficient overflow issues. An algebraic number of degree 2, for example, can cause coefficients to double in binary representation size per multiplication, meaning that if you start out with say a coefficient that fits in 2 bits (0-3), after 6 multiplications with operands of similar coefficient size it may potentially exceed the capacity of a 32-bit integer, and after 7-8 multiplications it may exceed the capacity of a 64-bit integer. This is the extreme case, of course; usually if you use a rational representation there will likely be common factors that can be eliminated to keep the coefficients within range. However, this may not always be true when given pathological input.

This situation rapidly worsens for algebraic numbers of higher degrees; a degree-4 or degree-5 representation may overflow so quickly that one would be forced to use BigInteger representations in order to prevent integer overflows from giving you the wrong answers. Using BigInteger representations, of course, introduces overhead and can significantly slow down your computations, such that in many cases putting up with floating-point roundoff errors may be more desirable. But if exact arithmetic is paramount, then this is the price you pay. :P

Re: Interval arithmetic

Postby mr_e_man » Mon Apr 01, 2024 11:38 pm

But you don't have to put up with floating-point roundoff errors, either. :) You can catch the errors in intervals, to make sure they don't cause any trouble.

Re: Interval arithmetic

Postby mr_e_man » Tue Apr 02, 2024 12:00 am

mr_e_man wrote:It is doable if you only have square roots (possibly nested).

quickfur wrote:I have managed to figure out how to do it for the golden ratio, and probably for arbitrary roots of quadratics by extension. Basically it revolves around finding polynomial factors that annihilate the linear term in your polynomial, so that you end up with a square on one side which must be non-negative, and a linear expression on the other side. Use the inequality of rationals and the presumed irrationality of the algebraic number to exit early from your decision tree in the more obvious cases.

Here's what I had in mind.

First note that x < y is equivalent to x - y < 0, and we know how to subtract. So we only need to determine whether numbers are positive or negative.

Suppose x,y,z are numbers in some field F where you already know how to compare. Then x + y√z is an arbitrary number in the extension field F(√z). (Assume √z is not in F, and z is positive.) We want to determine the sign of this, i.e. to compare x+y√z <=> 0.
It's positive if x and y are both positive (or if one is positive and the other is 0).
It's negative if x and y are both negative (or if one is negative and the other is 0).
If x > 0 > y, then the inequality is equivalent to x <=> -y√z, and both sides are positive, so we can square and get x² <=> y²z, that is, x² - y²z <=> 0.
If x < 0 < y, then the inequality is equivalent to -x >=< y√z, and both sides are positive, so we can square and get x² >=< y²z, that is, x² - y²z >=< 0.

Thus, if you can compare in F, then you can also compare in F(√z).
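In Python, with a number represented either as a Fraction (the base field ℚ) or as a triple (x, y, z) meaning x + y√z, where x, y, z live one step down the tower, this case analysis becomes a short recursion. A sketch (the representation is my own, and both operands of add/mul are assumed to lie at the same level of the tower):

```python
from fractions import Fraction

def add(a, b):
    if isinstance(a, Fraction):
        return a + b
    (x1, y1, z), (x2, y2, _) = a, b
    return (add(x1, x2), add(y1, y2), z)

def neg(a):
    if isinstance(a, Fraction):
        return -a
    x, y, z = a
    return (neg(x), neg(y), z)

def mul(a, b):
    # (x1 + y1*sqrt(z))(x2 + y2*sqrt(z)) = (x1x2 + y1y2z) + (x1y2 + y1x2)*sqrt(z)
    if isinstance(a, Fraction):
        return a * b
    (x1, y1, z), (x2, y2, _) = a, b
    return (add(mul(x1, x2), mul(mul(y1, y2), z)),
            add(mul(x1, y2), mul(y1, x2)), z)

def sign(v):
    """Sign (+1, 0, -1) of x + y*sqrt(z), recursing down the tower."""
    if isinstance(v, Fraction):
        return (v > 0) - (v < 0)
    x, y, z = v
    sx, sy = sign(x), sign(y)
    if sx == 0:
        return sy
    if sy == 0 or sx == sy:
        return sx
    # Opposite signs: squaring drops the comparison into the smaller field.
    return sx * sign(add(mul(x, x), neg(mul(mul(y, y), z))))

# 3 - 2*sqrt(2) > 0 and 1 - sqrt(2) < 0:
print(sign((Fraction(3), Fraction(-2), Fraction(2))))  # 1
print(sign((Fraction(1), Fraction(-1), Fraction(2))))  # -1

# Nested: 2 - sqrt(2 + sqrt(2)) > 0, in the chain Q < Q(sqrt 2) < Q(sqrt(2 + sqrt 2)):
Q = Fraction
val = ((Q(2), Q(0), Q(2)),    # 2, viewed as an element of Q(sqrt 2)
       (Q(-1), Q(0), Q(2)),   # -1
       (Q(2), Q(1), Q(2)))    # z = 2 + sqrt(2)
print(sign(val))              # 1
```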

Re: Interval arithmetic

Postby quickfur » Tue Apr 02, 2024 7:22 pm

For a simple quadratic extension Q(√r) with square-free r>1, the procedure is very simple:

Given an input number p+q√r, its sign is computed thus:

1. If either p or q is zero, simply return the sign of the non-zero coefficient.

2. If both p and q are of the same sign, return that common sign. This is the early exit simple case.

3. If p < 0 and q > 0, then the sign is the same as the sign of q²·r - p².

4. Otherwise, p > 0 and q < 0; in which case the sign is the same as the sign of p² - q²·r.
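A direct transcription of these four steps into Python (`sign_quad` is an illustrative name; Python's integers are arbitrary-precision, so there are no overflow worries here):

```python
def sign_quad(p: int, q: int, r: int) -> int:
    """Sign of p + q*sqrt(r), for integers p, q and square-free r > 1,
    using integer arithmetic only."""
    sp = (p > 0) - (p < 0)
    sq = (q > 0) - (q < 0)
    if sp == 0 or sq == 0:
        return sp or sq          # step 1: sign of the non-zero coefficient
    if sp == sq:
        return sp                # step 2: common sign, early exit
    d = q * q * r - p * p        # steps 3 and 4: compare by squaring
    s = (d > 0) - (d < 0)
    return s if sp < 0 else -s

print(sign_quad(-7, 5, 2))   # 1:  5*sqrt(2) ~ 7.07 > 7
print(sign_quad(7, -5, 2))   # -1
print(sign_quad(-10, 7, 2))  # -1: 7*sqrt(2) ~ 9.90 < 10
```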

Re: Interval arithmetic

Postby quickfur » Tue Apr 02, 2024 7:36 pm

As for generalization to higher degree algebraics: notice that the expressions (p² - q²r) or (q²r - p²) are the result of multiplying (p + q√r) by its conjugate. This has the effect of eliminating the radical, thus making the sign computable with integers alone.

I haven't confirmed this, but I suspect that a similar procedure can be used for determining the sign of a cubic algebraic number (p + q·s^(1/3) + r·s^(2/3)). IIRC there isn't a single conjugate in the cubic case, but two potentially different conjugates. Probably some combination of multiplying by these two conjugates would eliminate the cube roots and squared cube roots, yielding an expression whose sign can be determined by integer arithmetic alone.

If this method works, then I'd wager that it should also work in the quartic case.

However, I'm not so sure about the quintic case, because of Abel-Ruffini. There might potentially be cases for which the required co-roots can't be computed directly with integer arithmetic alone, and you might have to resort to more advanced techniques. But I could be wrong.

Re: Interval arithmetic

Postby mr_e_man » Wed Apr 03, 2024 12:26 am

quickfur wrote:For a simple quadratic extension Q(√r) with square-free r>1, the procedure is very simple:

[...]

:lol: That's what I just said.

But the base field doesn't need to be ℚ; the base field itself can be a quadratic extension of ℚ.

You can have a chain of quadratic extensions, and compute signs recursively using your procedure. For example, you can determine the sign of

5 - 5√3 √[2 - √3] + (4 - 12√3 + √[2 - √3]) √[-2√3 + 7√[2 - √3]],

by considering the chain ℚ ⊂ ℚ(√3) ⊂ ℚ(√[2 - √3]) ⊂ ℚ(√[-2√3 + 7√[2 - √3]]).


quickfur wrote:I haven't confirmed this, but I suspect that a similar procedure can be used for determining the sign of a cubic algebraic number (p + q·s^(1/3) + r·s^(2/3)). IIRC there isn't a single conjugate in the cubic case, but two potentially different conjugates. Probably some combination of multiplying by these two conjugates would eliminate the cube roots and squared cube roots, yielding an expression whose sign can be determined by integer arithmetic alone.

Indeed, the product of all three roots is, up to sign, just the constant term in the minimal polynomial. See Vieta's formulas.

However, the minimal polynomial doesn't determine the sign of the algebraic number, even in the quadratic case: If x² - 2 = 0, then x may be positive or negative. It's worse for higher degrees.

The polynomial x⁵-8x+1 has two positive roots, one negative root, and two complex roots. How do you even specify one root? What data do we need in addition to the polynomial?

Of course you could isolate the root in an interval (the main topic here), but the interval isn't unique, and you brought up algebraic numbers specifically as a way to avoid using intervals.

You could just label the roots 1,2,3 in order (from negative to positive, ignoring the complex roots). Also I've heard of something called Thom encoding. But I don't know how difficult these would be to compute with.
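To make the isolating-interval option concrete, here is a sketch in Python that brackets the three real roots of x⁵ - 8x + 1 with exact rational intervals (bisection is safe here because the polynomial has no rational roots, so the sign at a rational midpoint is never zero):

```python
from fractions import Fraction

def f(x):
    return x**5 - 8*x + 1

def bisect_root(lo, hi, steps=20):
    """Shrink an isolating interval [lo, hi] by bisection, in exact
    rational arithmetic; f(lo) and f(hi) must have opposite signs."""
    lo, hi = Fraction(lo), Fraction(hi)
    for _ in range(steps):
        mid = (lo + hi) / 2
        if (f(lo) > 0) == (f(mid) > 0):
            lo = mid
        else:
            hi = mid
    return lo, hi

# Sign changes at integer points bracket the three real roots:
print([f(x) for x in (-2, -1, 0, 1, 2)])  # [-15, 8, 1, -6, 17]
for a, b in ((-2, -1), (0, 1), (1, 2)):
    lo, hi = bisect_root(a, b)
    print(f"root in [{float(lo):.7f}, {float(hi):.7f}]")
```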

Re: Interval arithmetic

Postby quickfur » Thu Apr 04, 2024 8:07 pm

Using intervals to isolate a root is not a problem. You can use rational numbers for this so you still only need integer arithmetic. It doesn't have to be unique as long as it indeed isolates the root. What's important is that it lets you determine the sign of certain expressions that would otherwise be indeterminate, such as the x²-2 = 0 case you gave. Isolating a specific root (whether by intervals or otherwise, it doesn't really matter as long as there's some property that can be used for differentiating between the roots) gives you the specific information you need to resolve these cases. Basically, when faced with an ambiguous case in your decision tree (e.g. an expression E where substituting one root vs. another would give opposite signs), you use the isolating property to help you choose the correct branch, thereby allowing you to arrive at the result.

For example, if your polynomial is x²-2 = 0 and you're trying to determine the sign of (p² + q²x), then, by assuming that you want the positive root x, you can deduce that (p² + q²x) is positive; with the negative root, its sign would depend on p and q. Without this extra information that isolates the root, this deduction would not be possible.

