reikonomusha's comments | Hacker News

It is not known, and the model problem for this is Hilbert's 13th [1].

Nonetheless, "elementary function" is a technical term dating back to the 19th century; it's very much not a general adjective whose synonym is "basic".

[1] https://en.wikipedia.org/wiki/Hilbert%27s_thirteenth_problem


Thanks, actually https://en.wikipedia.org/wiki/Elementary_function confirms your claim.

Nevertheless, it is a horrible definition. Mathematicians have often taken care to define things as close to everyday intuition as they could (and then proved an equivalence). The "elementary function" in this definition is just a weird mix of concerns.


Elementary function is also a general English phrase that can very much be used to represent the functions on a scientific calculator.

To be fair to Spivak, he did say it was a comprehensive introduction. :)

EML can represent the real absolute value, so long as we agree with the original author's proviso defining log(0) and exp(-∞): we write sqrt(x²) as f(x) = exp((1/2)log(x²)). Traditionally, log(0) isn't defined, but the original author stipulated it to be -∞, with all arithmetic working over the "extended reals", which makes

    abs(0)
    = f(0)            ; by defn
    = exp(1/2 log 0)  ; by defn
    = exp(-∞/2)       ; log 0 rule
    = exp(-∞)         ; extended real arith
    = 0               ; exp(-∞) rule
If we don't agree with this, then abs() could be defined with a hole punched out of the real line. The logarithm function isn't exactly elegant in this regard with its domain restrictions. :)
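Under those stipulations, the construction can be sanity-checked numerically. A small Python sketch (xlog and abs_eml are hypothetical names; xlog encodes the log(0) = -∞ stipulation):

```python
import math

def xlog(x):
    # Extended-real log: log(0) is stipulated to be -inf, per the thread.
    return -math.inf if x == 0 else math.log(x)

def abs_eml(x):
    # abs(x) = sqrt(x^2) = exp((1/2) * log(x^2))
    return math.exp(0.5 * xlog(x * x))

print(abs_eml(-3.0))  # ≈ 3.0
print(abs_eml(0.0))   # 0.0, since exp(-inf) = 0 over the extended reals
```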

It's ok for elementary functions to have singularities, like 1/x at x=0. But I'm not sure what happens with your version of abs, since the log function has branches. log(1) is any of 0, 2*pi*i, 4*pi*i, etc.
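The branch issue is easy to see numerically with Python's principal-branch cmath (a sketch, not part of EML itself; pseudo_abs is my name):

```python
import cmath

def pseudo_abs(z):
    # exp(log(z^2)/2) recovers |z| on the reals, but over C the principal
    # branch of log gives something else: for z = i, z^2 = -1,
    # log(-1) = i*pi, so exp(i*pi/2) = i, not |i| = 1.
    return cmath.exp(0.5 * cmath.log(z * z))

print(pseudo_abs(-3))  # ≈ (3+0j): works on the reals
print(pseudo_abs(1j))  # ≈ 1j, not 1: the branch cut bites
```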

You are correct, it is undecidable by Richardson's theorem [1].

[1] https://en.wikipedia.org/wiki/Richardson%27s_theorem


That result does not apply to EML: EML doesn't have the |·| absolute value function, a prerequisite for Richardson's theorem.

Yes it does; you can build the absolute value as sqrt(x²), and sqrt(x) and x² are both constructible using EML.

If I understand the page correctly, the extension by Miklós Laczkovich should be enough to show that it's undecidable.

You wrote:

> It's decidable whether two NAND circuits implement the same function, I'm pretty sure it's not decidable if two EML trees describe the same function.

Perhaps, perhaps not. "Same function" basically asks whether this question is solvable:

A(x[,y,...]) = f(x[,y,...]) - g(x[,y,...]) == 0 everywhere?

If a user brings EML functions f and g, given their binary EML trees, can we decide whether they represent the same function? So the question form is

A(x) = 0 EVERYWHERE?

(analogous to: given two fractions a/b and c/d, do they represent the same number?)
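The fraction analogy is the decidable case: cross-multiplication settles it in finitely many steps. A trivial Python sketch (same_fraction is my name):

```python
def same_fraction(a, b, c, d):
    # a/b == c/d  iff  a*d == c*b  (assuming b and d are nonzero),
    # so fraction equality is decidable by exact integer arithmetic.
    return a * d == c * b

print(same_fraction(1, 2, 2, 4))  # True: 1/2 == 2/4
print(same_fraction(1, 2, 2, 3))  # False
```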

From Wikipedia link reikonomusha gave:

> Miklós Laczkovich removed also the need for π and reduced the use of composition.[5] In particular, given an expression A(x) in the ring generated by the integers, x, sin xⁿ, and sin(x sin xⁿ) (for n ranging over positive integers), both the question of whether A(x) > 0 for some x and whether A(x) = 0 for some x are unsolvable.

Here the question forms are

1) does there exist an x such that A(x) > 0? (does A(x) become positive anywhere?)

2) does there exist an x such that A(x) = 0? (basically, finding real roots)

So at least the forms on Wikipedia don't generate the results both of you claim they do.

It does present undecidability results, but not straightforwardly in the context of this EML work.

Second, Richardson's theorem is about functions on the reals, not complex functions (I mean, the roots must lie somewhere).


> an expression in the ring generated by the integers, x, sin xⁿ, and sin(x sin xⁿ)

We can always write EML trees for expressions generated by the integers, x, sin xⁿ, and sin(x sin xⁿ), right?

So we should be able to write EML trees for any two such expressions, A and B. If they're equal everywhere, then A - B = 0 everywhere. A - B is also in the aforementioned ring.

If there were a decision procedure that could always determine whether two EML trees represent the same function, that would contradict Miklós Laczkovich's extension, right?


No, Miklós Laczkovich's extension as described on Wikipedia only says that both of the following questions are proven undecidable:

1) is there some value x such that F(x)=A(x)-B(x)=0?

2) is there some value x such that F(x)>0?

while you asked:

> I'm pretty sure it's not decidable if two EML trees describe the same function.

that would be

3) is F(x)=A(x)-B(x)=0 for every x?

which Miklós Laczkovich's extension does not provide.

And you ignore the fact that Miklós Laczkovich's extension applies to real numbers and functions...


If it's undecidable whether it's 0 at even ONE point, clearly you can't prove that it's 0 everywhere.

Likewise, if it's not decidable for real-valued functions, clearly it's not decidable for complex valued functions.


That's not how this works.

Decidability does not distribute over pointwise question-asking on sets; if you believe it does, show us the proof.

Telling whether an EML(x,y),1-constructed expression is identically 0 is in the gray zone: as far as I can tell, it has neither been proven decidable nor proven undecidable.

Nevertheless, regardless of decidability, the authors clearly show that multipoint sampling/testing is a decent filter, and the shorter resulting expressions have been proven correct in the results, at least for the construction given.
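As a sketch of that multipoint sampling/testing filter (probably_equal is a hypothetical name; agreement on samples is only evidence of equality, never proof):

```python
import math
import random

def probably_equal(f, g, trials=1000, tol=1e-9):
    # Multipoint sampling: a necessary, not sufficient, test that f == g.
    # Any disagreeing sample proves the functions differ; agreement at
    # every sampled point is merely a filter, as in the article.
    for _ in range(trials):
        x = random.uniform(-10.0, 10.0)
        try:
            fx, gx = f(x), g(x)
        except (ValueError, OverflowError):
            continue  # skip points outside a common domain
        if not math.isclose(fx, gx, rel_tol=tol, abs_tol=tol):
            return False
    return True

# sin(2x) and 2 sin(x) cos(x) agree everywhere; x^2 and x^3 do not.
print(probably_equal(lambda x: math.sin(2*x),
                     lambda x: 2*math.sin(x)*math.cos(x)))
print(probably_equal(lambda x: x*x, lambda x: x*x*x))
```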


Arnold (as reported by Goldmakher [1]) does prove the unsolvability of the quintic in finite terms of arithmetic and single-valued continuous functions (which does not include the complex logarithm). TFA's result is stronger: it establishes something about the solvability of the monodromy groups of all EML-derived functions. So it doesn't seem to be a "rehash", even if their specific counterexample could have been achieved either in fewer steps or with less machinery.

[1] https://web.williams.edu/Mathematics/lg5/394/ArnoldQuintic.p...


Arnold's proof can be used to show that certain classes of functions are insufficient to express a quintic formula.

These classes can always safely include all single-valued continuous functions (you cannot even write the _quadratic_ formula in terms of arithmetic and single-valued continuous functions!), but also plenty of non-single-valued functions (e.g. the ±sqrt function which appears in the well-known quadratic formula).

Applying Arnold's proof to the class given by arithmetic and all complex nth root functions (also multivalued) gives the usual Abel-Ruffini theorem. But Arnold's proof applies to the class "all EML-expressible functions" without modification.


Wikipedia says:

> More generally, in modern mathematics, elementary functions comprise the set of functions previously enumerated, all algebraic functions (not often encountered by beginners), and all functions obtained by roots of a polynomial whose coefficients are elementary. [...] This list of elementary functions was originally set forth by Joseph Liouville in 1833.

which seems to be what the blog post references.


As is usual with these kinds of "structure theorems" (as they're often called), we need to precisely define what set of things we seek to express.

A function which solves a quintic is reasonably ordinary. We can readily compute it to arbitrary precision using any number of methods, just as we can do with square roots or cosines. Not just the quintic, but any polynomial with rational coefficients can be solved. But the solutions can't be expressed with a finite number of draws from a small repertoire of functions like {+, -, *, /}.

So the question is, does admitting a new function into our "repertoire" allow us to express new things? That's what a structure theorem might tell us.

The blog post is exploring this question: Does a repertoire of just the EML function, which has been shown by the original author to be able to express a great variety of functions (like + or cosine or ...) also allow us to express polynomial roots?


Combinations of the NAND gate can express any Boolean function. The Toffoli (CCNOT) or Fredkin (CSWAP) can express any reversible Boolean function, which is important in quantum computing where all gates must be unitary (and therefore reversible). The posited analog is that EML would be the "universal operator" for continuous functions.
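The NAND universality mentioned above can be sketched in a few lines (the helper names are mine):

```python
def nand(a, b):
    # The single universal gate: everything below is built from it.
    return not (a and b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return nand(nand(a, b), nand(a, b))

def or_(a, b):
    return nand(nand(a, a), nand(b, b))

print(not_(True))        # False
print(and_(True, True))  # True
print(or_(False, True))  # True
```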

The definition of "elementary function" typically includes functions which solve polynomials, like the Bring radical. The definition was developed and is most fitting in algebraic contexts where algebraic structure is meaningful, like Liouvillian structure theorems, algorithmic integration, and computer algebra. See e.g.

- Page 2 and the following example of https://billcookmath.com/courses/math4010-spring2016/math401... (2016)

- Ritt's Integration in Finite Terms: Liouville's Theory of Elementary Methods (1948)

Analysis books don't often define the class of elementary functions rigorously; instead, they refer to examples of them informally.


> See e.g. Page 2 and the following example of https://billcookmath.com/courses/math4010-spring2016/math401... (2016)

There appears to be a typo in that example; I assume "Essentially elementary functions are the functions that can be built from ℂ and f(x) = x" should say something more like "the functions that can be built from ℂ and f(x) = y".


Not a typo! Think of f(x) = x as a seed function that can be used to build other functions. It's one way to avoid talking about "variables" as a "data type" and just keep everything about functions. We can make a function like x + x*exp(log(x)) by "formally" writing

    f + f*(exp∘log)
where + and * are understood to produce new functions. Sort of Haskell-y.

> The definition of "elementary function" typically includes functions which solve polynomials, like the Bring radical.

What. Does that "typical definition" of elementary function include elliptic functions as well, by any chance?


Not that I've seen.

"The quintic has no closed form solution" is a theorem that is more precisely stated (in the usual capstone Galois proof) as follows: The quintic has no closed form solution in terms of arbitrary compositions of rational numbers, arithmetic, and Nth roots. We can absolutely express closed form solutions to the quintic if we broaden our repertoire of functions, such as with the Bring radical.

The post's argument is different from the usual Galois theory result about the unsolvability of the quintic: it shows a property that must hold for all EML(x,y)-derived functions, and a hypothetical quintic-solver function lacks that property, so no function we add to our repertoire via EML will express it (or any other function, elementary or not, that lacks this property).


Cool explanation, thanks!

Bring radicals are just cheating.

You can't solve an equation? Why not just introduce a function that is equal to the solution of the equation! Problem solved.


This fundamental "cheat" gave rise to some of the most important pure and applied mathematics known.

Can't solve the equation x^2 - a = 0? Why not just introduce a function sqrt(a) as its solution! Problem solved.

Can't solve the differential equation y'' = -y? Why not just introduce a function sin(x) as its solution! Problem solved.

A lot of 19th century mathematics was essentially this: discover which equations had solutions in terms of things we already knew about, and if they didn't and it seemed important or interesting enough, make a new name. This is the whole field of so-called "special functions". It's where we also get the elliptic functions, Bessel functions, etc.

The definition of "elementary function" comes exactly from this line of inquiry: define a set of functions we think are nice and algebraically tractable, and answer what we can express with them. The biggest classical question was:

    Do integrals of elementary functions give us elementary functions?
The answer is "no" and Liouville gave us a result which tells us what the answer does look like when the result is elementary.

Risch gave us an algorithm to compute the answer, when it exists in elementary form.


That's one way to get at complex numbers and the sine function. But it's not the only one.

E.g. you can get complex numbers from matrices.

But if you want to go in your direction: you can say we get fractions and negative numbers this way.
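The matrix route is concrete: a + bi corresponds to the 2×2 real matrix [[a, -b], [b, a]], and matrix multiplication then reproduces complex multiplication. A Python sketch (names are mine):

```python
def cmat(a, b):
    # a + bi  <->  [[a, -b], [b, a]]
    return [[a, -b], [b, a]]

def matmul(m, n):
    # Plain 2x2 matrix product.
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

i = cmat(0, 1)
print(matmul(i, i))  # [[-1, 0], [0, -1]], i.e. i*i = -1
```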


Sure. But the square root and the sine function also have nice geometric interpretations.

Bring radicals don't. They're just defined as a solution to this particular quintic.

Kinda a similar story with the Lambert W function.


The Bring radical has a great geometric interpretation: BR(a) is where the curve x^5 + x + a crosses the x axis.

Like sine or exp, it also has a nice series representation:

    sum(k = 0 to inf) binom(5k,k) (-1)^(k+1) a^(4k+1) / (4k+1)
We can compute its digits with the very rapidly convergent Newton iteration

    x <- x - (x^5 + x + a)/(5x^4 + 1)
and so on.
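That Newton iteration is only a few lines of Python (bring_radical is a hypothetical name, and the starting guess and tolerance are illustrative):

```python
def bring_radical(a, tol=1e-15):
    # Newton iteration for the unique real root of x^5 + x + a = 0
    # (unique because the derivative 5x^4 + 1 is always positive).
    x = -a  # for small |a|, the root is near -a
    for _ in range(100):
        step = (x**5 + x + a) / (5*x**4 + 1)
        x -= step
        if abs(step) < tol:
            break
    return x

r = bring_radical(1.0)
print(r)           # ≈ -0.7549, the real root of x^5 + x + 1
print(r**5 + r + 1)  # ≈ 0
```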

Why not invite it to the table of functions?

Ellipses are simple and beautiful figures known to every child, but why do we rarely invite the elliptic integrals to the table too?

I guess my point is that "nice geometric interpretation" is a little subjective and hasn't led to much consistency in our choice of which functions are popular or obscure.


> The Bring radical has a great geometric interpretation

Erm... No. It's not great.

> Why not invite it to the table of functions?

Because it's too arbitrary.


> This fundamental "cheat" gave rise to some of the most important pure and applied mathematics known.

> Can't solve the differential equation y'' = -y? Why not just introduce a function sin(x) as its solution! Problem solved.

But that's not how sine was introduced. It's been around since classical geometry. It was always easy to solve the differential equation y'' = -y, because the sine had that property, and we knew that.

Heck, you can tell this just by looking at the names of the functions you mentioned. "Sine" is called "sine", which appears to have originated as an attempted calque of a Sanskrit term (referring to the same function) meaning "bowstring".

"Square root" is named after the squaring function that was used to define it.

Introducing an answer-by-definition gives us negative numbers, rational numbers, imaginary numbers, and nth roots... but not sines, come on. You can just measure sines.


You can calculate, measure, draw, construct, write a power series for, express as hypergeometric function, etc. the Bring radical too.

All of these concepts, from sine to real numbers, Bring radicals to complex exponentials, can all be defined in different, equivalent ways. What is interesting are the properties invariant to these definitions.

It still doesn't seem to me that a square root should be any more or less contrived than a Bring radical. Maybe we should call it an ultraradical instead?


For me, what makes the square root more “natural” is that, although it’s usually introduced as an “answer by definition”, it can also be arrived at by wondering what happens if you take something to the halfth power.
