
For actual silicon, this does not seem like a relevant result. I think there are already time O(log n) multiplier circuits out there.

Edit: a typical imul will probably be no worse than O(n), just to add something less speculative.



Big-O notation is asymptotic, so it is meaningless to describe an imul as O(n). Given that an imul performs a fixed 64x64-bit multiplication, it is almost a tautology to say it can be done in some constant number of ops/cycles.


That was not what I was trying to say, wrong as I may still be.

I meant to talk about nxn-bit multiplication. If you scale n then, given the same basic architecture, you also scale the circuit delay. When the delay scales linearly with the number of bits, I'd call that architecture O(n) in time. That seems to make intuitive sense to me, though I might have it wrong. I used the term imul merely as shorthand for integer multiplication; I wasn't alluding to any specific architecture or width, since plenty of CPU architectures out there use that mnemonic.
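To illustrate the point (a hypothetical sketch, not a model of any real imul circuit): in a plain shift-and-add multiplier, one adder stage is traversed per bit of the operand, so the number of sequential stages, a rough proxy for circuit delay, grows linearly with the width n:

```python
def shift_add_multiply(a: int, b: int, n: int):
    """Multiply two n-bit ints via shift-and-add; count sequential adder stages."""
    product = 0
    stages = 0
    for i in range(n):
        if (b >> i) & 1:
            product += a << i
        stages += 1  # one adder stage traversed per bit, even when the addend is zero
    return product, stages

# The delay proxy scales linearly with operand width n:
for n in (8, 16, 32, 64):
    _, stages = shift_add_multiply(3, 5, n)
    print(n, stages)  # stages == n
```

Doubling n doubles the stage count here, which is what I'd call an O(n)-time architecture.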


> I think there are already time O(log n) multiplier circuits out there.

This can only be magic! It's not possible!


There's always a time/space tradeoff involved; see, for example, the Wallace tree multiplier [1], which is indeed O(log n) in time.

[1] https://en.wikipedia.org/wiki/Wallace_tree
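A rough sketch of why the Wallace reduction has O(log n) depth: each layer of 3:2 compressors (full adders) turns every group of 3 partial-product rows into 2, shrinking the row count by a factor of about 3/2, so the number of layers needed to get from n rows down to the final 2 grows logarithmically in n. A small counting sketch (assumed idealized, ignoring wiring and the final carry-propagate adder):

```python
def wallace_layers(n: int) -> int:
    """Count the 3:2-compressor layers needed to reduce n rows to 2."""
    rows, layers = n, 0
    while rows > 2:
        # each layer compresses groups of 3 rows into 2; leftovers pass through
        rows = 2 * (rows // 3) + rows % 3
        layers += 1
    return layers

# Doubling the operand width adds only a constant number of layers:
for n in (8, 16, 32, 64, 128):
    print(n, wallace_layers(n))
```

The layer count grows like log base 3/2 of n, which is where the O(log n) time bound comes from, at the cost of O(n^2) partial-product hardware.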



