lechuga's comments | Hacker News

Minor nit re: Rails' legacy. IMO a major reason it was so popular was Ruby itself. It's certainly not the right hammer for every nail, but the syntax is among the most pleasant I've ever worked with. Whenever I can find a project where writing a little Ruby makes sense, I use it. Because I enjoy it :)

While it's useful to me and dominates what I code in atm, I don't necessarily like writing Go. Better than C/C++, sure, but I wouldn't go so far as to say I enjoy it.


Have you tried Crystal? I used to think I loved Ruby for its syntax, until I tried Crystal; then I found out that Ruby's dynamic typing and sublime object model are just as important as the syntax, if not more so.
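For anyone who hasn't felt the difference, here's a contrived little sketch (plain Ruby, nothing exotic) of the open classes and duck typing I mean:

    # Reopen a core class and add behavior; dispatch is by what an
    # object responds to (duck typing), not by declared types.
    class String
      def shout
        upcase + "!"
      end
    end

    def announce(thing)
      puts thing.shout   # anything with a #shout works here
    end

    announce("hello")        # => HELLO!

    # You can even define a method on a single object at runtime:
    duck = Object.new
    def duck.shout
      "quack"
    end
    announce(duck)           # => quack

Statically typed languages like Crystal can approximate some of this, but the fully dynamic object model is where a lot of Ruby's flexibility comes from.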


Greedo shot first


Slightly over the top, but awesome.


A somewhat more thorough take on the same good question: http://www.reddit.com/r/Bitcoin/comments/36e8by/21dotco_a_bi...

I'd like to see this answered in some forum. It's very hard to get a concrete sense of the vision otherwise.


I don't know if it's even possible to answer such questions concretely. Everything hinges on transaction fees, and future fees depend on both the rate of transactions (which they propose to increase greatly) and the block size (which is being hotly debated right now). The case where their chip doesn't even mine enough satoshis to pay its own transaction fees isn't out of the question.
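A back-of-the-envelope illustration (every number below is made up purely for the sake of the arithmetic; real hash rates, rewards, and fees will differ):

    # All figures hypothetical, just to show the shape of the problem.
    chip_hashrate    = 1e9                  # 1 GH/s embedded chip
    network_hashrate = 350e15               # 350 PH/s network total
    block_reward_sat = 25 * 100_000_000     # 25 BTC reward, in satoshis
    blocks_per_day   = 144

    share     = chip_hashrate / network_hashrate
    sat_daily = share * block_reward_sat * blocks_per_day
    puts "expected satoshis/day: %.0f" % sat_daily               # ~1029

    tx_fee_sat = 10_000                     # hypothetical miner fee
    puts "days to earn one tx fee: %.1f" % (tx_fee_sat / sat_daily)  # ~9.7

With numbers in that ballpark, the chip spends more than a week mining just to cover the fee on a single transaction, which is exactly the failure mode I mean.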


Travis sounds exactly like a Disney villain. I bet somehow dead puppies are also involved.


Disagree. In most cases you want TCP. Rarely do applications gracefully deal with loss.


Why you want TCP:

You care about the data: you want it to arrive, you want it to arrive in order, and you want to be notified if there is a problem at the other end. You want the data sent to match exactly the data delivered.

File transfer, simple protocols, advanced critical protocols: anything where the dev doesn't want to think about how to account for packet loss. It's already done for you behind a standard interface.

Why you want UDP:

You want the latest packet of data, and you want it now. It doesn't matter if you lose a few packets. Telephony is the classic example: if you lose a frame, it's not the end of the world. Streaming video is another; use FEC to cope with loss.

Tedious real world analogy:

When you send a letter or a parcel, you have two options. You can send it recorded delivery and pay slightly more per go, but not have to worry about phoning the recipient to check it arrived.

However, if the letter is of little worth (like a flyer or something similar), then the notification and/or guarantee that it has arrived or will arrive is a pointless cost.
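If it helps, here's what the difference looks like at the socket level in Ruby (the loopback host and ports are placeholders; the TCP example assumes something is listening):

    require "socket"

    # TCP: a connected, ordered, reliable byte stream. The OS handles
    # retransmission and ordering; a failure surfaces as an error.
    tcp = TCPSocket.new("127.0.0.1", 9000)
    tcp.write("important record\n")
    tcp.close

    # UDP: fire-and-forget datagrams. No connection, no ordering,
    # and a lost packet is simply gone.
    udp = UDPSocket.new
    udp.send("frame 42 of the call", 0, "127.0.0.1", 9001)
    udp.close

Same few lines of code either way; the difference is entirely in what the kernel promises you underneath.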


I'm intimately familiar with cases where you actually would prefer to use UDP. My point is most of the time these cases aren't what you're dealing with.

In addition to reliability, TCP provides congestion control, which gives approximate per-connection fairness of bandwidth utilization: given a limited pipe of bandwidth W shared by N connections, each connection gets roughly a W/N slice. I don't know that I'd like to live on an internet where every hipster made their own decision about how to handle congestion control; see also congestive collapse.
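The fairness comes from AIMD (additive increase, multiplicative decrease). A toy simulation in Ruby (window sizes and capacity are made up, just to show the convergence):

    # Two flows share a bottleneck; both grow linearly and halve on
    # loss. Watch the initially unequal windows converge.
    w1, w2   = 1.0, 10.0
    capacity = 12.0
    20.times do
      w1 += 1.0
      w2 += 1.0                  # additive increase
      if w1 + w2 > capacity      # bottleneck starts dropping packets
        w1 /= 2.0
        w2 /= 2.0                # multiplicative decrease
      end
    end
    puts "w1=%.2f w2=%.2f" % [w1, w2]   # heading toward equal shares

A flow that skips the decrease step simply crowds everyone else out, which is the congestive-collapse worry.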


>every hipster made their own decision about how to handle congestion control

Unless they had access to a tier 1/2 network provider, it'd be mostly harmless, I suspect. They'd just spam their local gateway: "Look, I can send 300k packets a second over standard DSL."

But yes, TCP is generally a good thing.


> My point is most of the time these cases aren't what you're dealing with

Which is why the "default" protocol for most things is still TCP.


That's completely dependent on the application. SNMP over TCP would be a net loss because 1) it doesn't buy you anything, as SNMP doesn't have any requirements that TCP meets above what UDP provides, 2) TCP would be exceedingly expensive, requiring connection buildup and teardown, ACKs, etc., and 3) it's way easier to implement UDP than TCP, so dumb devices basically only need to construct a single Ethernet frame to make it work.
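To put point 3 in concrete terms, a query-style exchange is one datagram out and (hopefully) one back; in Ruby it's about this much code (the payload and address are placeholders, not real SNMP BER encoding):

    require "socket"

    # No handshake, no teardown, no per-connection state.
    sock = UDPSocket.new
    sock.send("GET sysUpTime", 0, "192.0.2.10", 161)  # 161 = SNMP port
    if IO.select([sock], nil, nil, 2)                 # 2s timeout
      reply, _sender = sock.recvfrom(1500)
      puts reply
    else
      puts "timed out; with UDP you just retry or give up"
    end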

Many, many applications deal much better with loss than they do with delayed retries. You don't want a VOIP app to perfectly recreate what the other party said 30 seconds ago, games usually only care about current state, and so on. In many of those cases, TCP is the wrong choice, but a lot of people use it out of familiarity.

Remember, UDP came after TCP. One isn't "better" than the other; they serve different requirements.


Simple example: Player positions in a FPS.

Would you rather wait for a retransmission of a position update from one second ago, or just forget about it and continue processing the next one?

In this context, you'll prefer the second option. It will probably cause a visual glitch, but the game can go on, and players will just say "lag" :)
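The standard trick is a sequence number per sender, dropping anything older than what you've already applied. A sketch in Ruby (not any particular engine's code):

    # Keep only the freshest position per player.
    latest_seq = Hash.new(-1)
    positions  = {}

    incoming = [
      [:p1, 3, 10.0, 4.0],
      [:p1, 1,  9.0, 4.0],   # arrived late: already superseded, drop it
      [:p1, 4, 11.0, 4.5],
    ]

    incoming.each do |id, seq, x, y|
      next if seq <= latest_seq[id]   # stale or duplicate packet
      latest_seq[id] = seq
      positions[id]  = [x, y]
    end

    p positions   # => {:p1=>[11.0, 4.5]}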


Isn't that kind of use case what DCCP was designed for?


Yes. You can show me a relatively rare corner case. But in most cases you want TCP.


It's far from being a corner case. UDP is commonly used in multiplayer games to provide a channel for unreliable data.


If you want to go with that argument, why not go all the way and say that the common case is HTTP and the corner cases are everything else?

It still won't change the fact that application needs won't always match TCP exactly. And that's the whole point.


IP telephony and video streaming are hardly rare applications.


Pursue a career in medicine. That way you can be absolutely sure you're working hard for a reason.


Would you have advised people to use the network during the split?


That's completely fair. Consensus systems are very difficult to (re-)implement correctly, but I would argue there is value in doing so. There is also great risk, and we appreciate that.

This time we took a slightly different approach: whenever possible, we directly ported the validation code from bitcoind. Granted, the script runner is the most fragile part, and for that we are relying on bitcoin-ruby, which does not directly port it from bitcoind.

While we are passing Matt Corallo's brilliant test suite, it is not currently recommended that anyone use Toshi in production for the purpose of managing bitcoin. And if you are going to run it in your environment, we highly suggest running it behind trusted bitcoind nodes. There are many pieces still missing, like all of the DoS penalty code. We currently have the demo running in the wild for the very purpose of giving helpful people the opportunity to break it.


> it is not currently recommended that anyone use Toshi in production for the purpose of managing bitcoin

Does Coinbase use Toshi in production?


The only instances of it we have running are the ones listed in the blog post (and perhaps a Litecoin version that we're trying to get to sync). They are in no way currently integrated with the web site.


Been working on this with Adrian for the last 3.5 months. Happy to answer any questions if I can.


It's great to have more examples of building indexed, fast-query representations of the current ledger state.

What did you guys go with as a policy for handling or avoiding orphan blocks...


Currently we're saving every block we see, and if we find an orphan whose parent is on the main chain, we try to connect the two. We aren't doing anything to explicitly avoid orphans.

If you're referring to side chain blocks (which some people also call orphan blocks), we'll extend them into the main chain should they win, or leave them forever marked as side chain blocks. We aren't doing anything to explicitly avoid these either.
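The bookkeeping is roughly this shape (a sketch, not Toshi's actual code): index parked blocks by prev_hash so an orphan can be attached the moment its parent connects.

    Block = Struct.new(:hash, :prev_hash)

    main_chain = { "genesis" => Block.new("genesis", nil) }
    orphans    = Hash.new { |h, k| h[k] = [] }   # prev_hash => blocks

    def accept(block, main_chain, orphans)
      unless block.prev_hash.nil? || main_chain.key?(block.prev_hash)
        orphans[block.prev_hash] << block   # parent unknown; park it
        return
      end
      main_chain[block.hash] = block
      # Connecting this block may unlock orphans waiting on it.
      waiting = orphans.delete(block.hash) || []
      waiting.each { |o| accept(o, main_chain, orphans) }
    end

    accept(Block.new("b2", "b1"), main_chain, orphans)  # parent unseen
    accept(Block.new("b1", "genesis"), main_chain, orphans)
    p main_chain.keys   # => ["genesis", "b1", "b2"]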


Why Ruby? (Nothing against it, just curious)


We're mostly a Ruby shop here, so using what we know is more efficient and allows more engineers to contribute (both internally and from an open-source perspective).

