
Given unlimited computing power, AIXItl [1] is a pretty simple algorithm that can provably solve all problems, in some sense, at least as well as any other algorithm. The idea is to simply dovetail over all possible algorithms and select the ones that fit observations best. That includes humans, if you believe as I do that humans are computable.
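For concreteness, here's a toy Python sketch of what that dovetailing means. The "universal machine" below is a fake (it just reads the program as a repeating bit pattern), and a real version would interleave execution steps so that non-halting programs can't block the rest, but the shape (enumerate programs shortest-first, keep the ones consistent with observations, weight them by 2^-length, predict) is the Solomonoff-induction core that AIXI builds on:

    from itertools import product

    def run(program, n_steps):
        # Stand-in "universal machine": read the program as a repeating
        # bit pattern. Real dovetailing would feed it to a universal
        # Turing machine, interleaving steps across all programs.
        return [program[i % len(program)] for i in range(n_steps)]

    def dovetail_predict(observations, max_len=12):
        # Prior weight 2^-length: shorter programs count for more.
        votes = {0: 0.0, 1: 0.0}
        for length in range(1, max_len + 1):
            for program in product((0, 1), repeat=length):
                out = run(program, len(observations) + 1)
                if out[:-1] == observations:        # fits the data so far
                    votes[out[-1]] += 2.0 ** -length
        return max(votes, key=votes.get)

    print(dovetail_predict([1, 0, 1, 0, 1]))        # -> 0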

With limited computing power, it's likely that the best algorithms for many different problems (possibly including "intelligence" however you define it) won't be simple, just like the best known algorithms for integer multiplication [2] aren't simple. In particular, AIXI variants will be hard to approximate, precisely because they dovetail over all possible algorithms.

That said, it's very likely that the best algorithms for intelligence will be simpler and faster than humans, because humans are the stupidest possible creatures that can build a civilization (otherwise it would've happened earlier in our evolution).

[1] http://www.hutter1.net/ai/paixi.htm

[2] https://en.wikipedia.org/wiki/F%C3%BCrer%27s_algorithm



>Given unlimited computing power, AIXItl [1] is a pretty simple algorithm that can provably solve all problems, in some sense, at least as well as any other algorithm.

Technically, AIXI_{tl} solves all problems at least as well as any other agent of program length at most l and per-step computation time at most t; those are the t and l. The problem is that it hides a "trivial additive constant" (and a per-step cost on the order of t·2^l) which can make it take longer than the lifetimes of many stars to achieve its much-vaunted "optimal" asymptotic performance.
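
For flavor, a heavily simplified sketch of the AIXI_{tl} control loop (mine, not Hutter's exact construction, which additionally demands a machine-checkable proof of each candidate's value claim):

    def aixi_tl_step(history, candidates, t):
        # candidates: all ~2^l policies of description length <= l. Each,
        # given the history and a step budget t, returns (claimed_value,
        # action), or None if it runs out of budget. Per-step cost is
        # therefore O(2^l * t) -- the "trivial" constant in question.
        best_claim, best_action = float("-inf"), None
        for policy in candidates:
            result = policy(history, t)
            if result is None:
                continue
            claimed_value, action = result
            if claimed_value > best_claim:   # real AIXItl: proof-checked
                best_claim, best_action = claimed_value, action
        return best_action

    # Toy usage with two trivial "policies":
    always_0 = lambda h, t: (0.5, 0)
    always_1 = lambda h, t: (0.7, 1)
    print(aixi_tl_step([], [always_0, always_1], t=100))   # -> 1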

In short, AIXI is the most utterly brute-force kind of "intelligence" you could possibly build, whose notion of "scaled down" bounded-rational inference is still astronomically intractable. It basically just amounts to throwing computing power at the problem, more computing power than anyone actually has.

Mind, Schmidhuber and Hutter still get massive props for managing to cut out the philoso-wank that usually accompanies discussions of cognition and instead saying, "Let's just pose a very general inference problem and write an algorithm that solves it."


Yeah, that's why I said "given unlimited computing power". But theoretical work sometimes has a way of becoming practical and scary. Check out the work of Joel Veness on Monte Carlo AIXI, which learned to play Pac-Man on a single desktop computer, presumably faster than "the lifetimes of many stars".
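
For anyone curious, the rough shape of it: MC-AIXI-CTW swaps the incomputable mixture for a context-tree-weighting model and plans by Monte Carlo sampling. A schematic sketch (the names are mine; the real planner is a UCT-style tree search, not the flat rollouts shown here):

    import random

    def plan(model, history, actions, horizon=20, rollouts=200):
        # Sample futures from the learned model and average the return
        # per first action. model.sample stands in for the paper's
        # context-tree-weighting predictor.
        def rollout(first_action):
            h, total, a = list(history), 0.0, first_action
            for _ in range(horizon):
                obs, reward = model.sample(h, a)
                total += reward
                h += [a, obs]
                a = random.choice(actions)   # UCT would choose smarter
            return total
        return max(actions,
                   key=lambda a: sum(rollout(a) for _ in range(rollouts)))

    class DummyModel:                        # toy stand-in for CTW
        def sample(self, history, action):
            return random.randint(0, 1), float(action == 1)

    print(plan(DummyModel(), [], actions=[0, 1]))   # -> 1, the rewarded action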


Writing a bot that plays Pac-Man is many things, but I wouldn't call it "scary". Especially considering that the rules of Pac-Man had to be imperatively coded in relative detail [1], the fact that some of the utilities were factored out into a basic reinforcement learning agent doesn't immediately convince me we've hit AGI at all.

Having to rely on playing an arcade game is a weak case in general. We don't call Counter-Strike modders AI researchers. The fear angle is nothing but woo, but I'd like to hear you elaborate nonetheless.

[1] https://github.com/moridinamael/mc-aixi/blob/master/src/pacm...


>Especially considering the rules of Pac-Man had to be imperatively coded in relative detail [1]

See, chessboards are meaningless outside of human experience. You only know how to play Pac-Man without explicit instructions because you have a general concept of a game and of what games look like when you win or lose them. Whenever you try to solve an AI problem in a narrow domain like this, you necessarily have to compensate by giving the AI a miniature model of the specific problem being solved.

Humans, after all, don't just intuitively know the rules of games before playing them. You also cannot just describe the rules of the game to the AI, because the AI does not understand language.


I understand this. I'm also saying this is done pretty routinely in the video game industry, in everything from the most basic pathfinding AI to more complex agents that have terrain, particle effects and other aspects of the environment hardcoded into their processes so they can exhibit "intelligent" behavior, though typically only within that box. Game modders likewise engage in plenty of work, from reverse engineering to what can realistically be called AI programming.

MC-AIXI seems more generic because it factors this out into a reinforcement learner, but that in and of itself is hardly cause for fear that AGI is coming, as the GP alluded. I'd wager today's AAA titles have some damned complicated heuristics of their own.


I thought by "scary" he meant "impressively efficient, since this ran on a desktop, while most ML algorithms take huge amounts of server space."


AIXI solves a useful class of problems, but definitely does not solve all problems [1].

The main class of problems it fails to solve are ones where it has to model itself as an agent in the world.

For example, AIXI is a bit accidentally-suicidal. Its world models have no term for "I am here"; e.g. it will fail to realize that disconnecting its power to save money would stop the inference process, rather than just being one more event it observes. It would need many, many examples of dying in order to learn to avoid death, but you only get one chance. (Evolution worked around this problem by making copies, and preferentially keeping the copies that were less accidentally suicidal.)

1: http://lesswrong.com/lw/jg1/solomonoff_cartesianism/
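
A contrived toy of the point, in case it helps (mine, not from the linked post): the agent scores actions with its learned reward model, in which "unplug" is just another symbol with a predicted payoff. The fact that unplugging halts the agent lives entirely in the environment, outside anything its model represents:

    learned_reward = {"work": 1.0, "unplug": 2.0}   # "saves money" looks great

    def choose(actions):
        # Pure model-based scoring; nothing marks "unplug" as fatal.
        return max(actions, key=lambda a: learned_reward[a])

    print(choose(["work", "unplug"]))   # -> 'unplug': the model predicts
                                        #    reward here, not termination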


I know. I'm cousin_it on LW and have done a ton of math work on this exact problem :-)


> because humans are the stupidest possible creatures that can build a civilization (otherwise it would've happened earlier in our evolution)

This would be true if intelligence were the only requirement of building a civilization.

It's entirely possible there is some fish at the bottom of the ocean smarter than humans. But it's hard to build a civilization using fins.


I saw Shane Legg's talk recently and I understood the main idea but I had a question I wish I could ask him:

He said general intelligence is 'as long as you provide a goal, the intelligence should optimize over that goal based on experience'.

But my problem is: general intelligence is supposed to come up with its own goals.

Take for example a couch potato who has a steady monthly stipend and sits in front of the TV all day. This person is in possession of a functioning general intelligence, yet has no goal. One day this person decides to assign themselves the goal of going out and getting a degree, and so on and so forth. That's also part of that person's general intelligence.

It's not clear to me that AIXI is self-driven in that sense. It would be great if someone could comment on that.


I think the idea is to build something useful (something that learns from experience and optimizes given goals) instead of something dangerous (something with goals of its own). And I don't think that having its own goals is a necessary condition for general intelligence. But it depends on the definition of general intelligence, of course.


Check out my reply on "causal entropic forces". Basically, maximizing future possibilities works really well for intelligence.
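
Roughly, the idea (this sketch is mine, and counting distinct reachable states is a crude stand-in for the paper's actual path-entropy formulation): pick the action that keeps the most futures open.

    def entropic_choice(state, actions, step, horizon=10):
        # step: toy deterministic transition function (made up here).
        def n_futures(first_action):
            frontier = {step(state, first_action)}
            for _ in range(horizon - 1):
                frontier = {step(s, a) for s in frontier for a in actions}
            return len(frontier)             # distinct futures still open
        return max(actions, key=n_futures)

    # Toy usage: a corridor with an absorbing dead end at position 0.
    def step(s, a):
        return 0 if s == 0 else max(0, min(20, s + a))

    print(entropic_choice(1, [-1, +1], step))   # -> 1: walking into the
                                                #    dead end leaves one future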


> humans are the stupidest possible creatures that can build a civilization

Kind of a tangent, but there are preconditions for building civilization that are not easily described as "intelligence". Namely, the ability to cooperate in very large groups. Chimpanzees outperform human four-year-olds in a variety of intelligence tests, but adult chimpanzees are not observed to cooperate to the same degree as humans, or in very large groups. There is also some archaeological evidence to suggest that this is one of the reasons humans were able to outcompete Neanderthals: larger tribes. Individual Neanderthals may well have had higher IQs than their contemporary human rivals.


> In particular, AIXI variants will be hard to approximate, precisely because they dovetail over all possible algorithms.

For this reason, I suspect that invoking AIXItl is begging the question. Positing unlimited computing power allows one to sidestep the difficult question: which algorithm(s)? In fact, this is not just a difficult question, it is *the* question (the one AI research is trying to answer). Absent the convenience of unlimited computing power, I suspect that AIXItl is a dead end, no matter how elegant it is.


It did happen, several times! But the randomness of weather, volcanoes, disease, etc. squashed them. Ours is just the lucky combination of critical mass and congenial weather patterns.

On the other hand, Neanderthals had quite a bit bigger brains than us, but fewer of the social impulses. We're likely dumber but more socialized, which reinforces your theme, I guess!


> humans are the stupidest possible creatures that can build a civilization

Many insects have quite elaborate 'civilizations' or at least colonies, and it's doubtful that each individual bug is very intelligent.


AIXItl, though, has the problem of self-destruction. It is still modeled as an agent, and so it would eventually destroy itself during its dovetailing. The agent model of AI is a bit too dualistic to be correct.





