If we are only talking about creating an artificial general intelligence, this argument should do: nature produced a general intelligence through natural selection, which is a less powerful process than what humans wield, if only because humans can run natural selection on computers and, on top of that, have intelligent tricks that evolution lacks. So if the human brain obeys the laws of physics, it should be possible to work out how it operates and implement it without all the messy cruft that is bound to have accumulated over evolutionary time.
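To make the "humans can run natural selection on computers" point concrete, here is a minimal sketch of a genetic algorithm. Every number in it (population size, mutation rate, the bit-string target) is an arbitrary placeholder; the point is only that selection plus mutation is a few lines of code, not that this toy scales to intelligence.

```python
import random

# Toy genetic algorithm: evolve a bit string toward an arbitrary target.
# All parameters here are placeholders chosen for illustration.
TARGET = [1] * 20

def fitness(genome):
    # Count bits that match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
for generation in range(100):
    # Selection: keep the fitter half, refill by mutating survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]
    if fitness(population[0]) == len(TARGET):
        break
print(f"best fitness {fitness(population[0])} after {generation + 1} generations")
```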
If you believe the estimates of the human brain's computing power, we only need a few more years of growth in the flops-per-dollar ratio before the required computational horsepower is available to smaller organizations. In fact, supercomputers being designed today already touch the lowest estimates of the brain's computational capacity. So if these estimates hold, all we lack is the knowledge of how to replicate the process artificially. That certainly doesn't mean it is right around the corner, but not looking into it would be stupid.
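As a back-of-the-envelope version of "a few more years": the brain-capacity figure, today's hardware cost, the doubling time, and the budget below are all assumptions pulled from the wide range of published guesses, not settled numbers, so treat the output as illustrative only.

```python
import math

# Back-of-the-envelope: years until brain-scale compute fits a given budget.
# Every constant below is an assumption for illustration, not an authoritative figure.
BRAIN_FLOPS = 1e16          # a mid-range guess at the brain's computational capacity
COST_PER_FLOPS_NOW = 1e-9   # assumed dollars per flop/s of hardware today
HALVING_YEARS = 2.0         # assumed doubling time of flops per dollar
BUDGET = 1e6                # what a smaller organization might spend, in dollars

flops_affordable_now = BUDGET / COST_PER_FLOPS_NOW
# Solve flops_affordable_now * 2**(t / HALVING_YEARS) >= BRAIN_FLOPS for t.
shortfall = BRAIN_FLOPS / flops_affordable_now
years = max(0.0, HALVING_YEARS * math.log2(shortfall))
print(f"~{years:.0f} years until brain-scale compute fits the budget")
```

With these particular guesses it prints roughly seven years; plug in a higher brain estimate or a slower doubling time and the horizon stretches accordingly.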
Then, of course, the standard pro-singularity refrain goes like this: once we have built one machine that faithfully copies what the human brain does best, exponential growth in processor capability ensures we will soon have an army of intelligent machines. That sentiment is easy to attack. But even if processors stopped getting better overnight (a grossly conservative assumption), we could still improve our algorithms to make these hypothetical artificial intelligences better. In nature, brain mass doesn't equate to capability. (That last sentence is anecdotal, and I won't bother digging up the papers to see whether it holds.)
You cite the sorry state of machine learning and artificial intelligence to show that we aren't anywhere near what strong AI requires. I agree that cognitive science and artificial intelligence are in a sorry state and barely make any effort to copy what biological brains do. But didn't you just warn against extrapolating future change from history? It is obvious to any young, ambitious scientist that our parents have been banging away at the wrong problems for the last 30 years.
This isn't a pro-singularity post; I think what Ray Kurzweil and his cronies preach resembles a cult too closely. They may or may not have ulterior motives, but as you say, things always look better on paper than they do in the real world. What I am trying to say is that it would be incredibly stupid not to investigate these things more closely (not just to _try_, but to throw our full weight at the problem), because the implications of successfully pulling this off would be profound.