Hacker News: shabby's comments

So, why won't a big restaurant chain try out order-at-table? Net margins are around 6%, so small changes in restaurants can swing the bottom line either way, and changing the customer experience is considered a big change. For a 500-unit chain, even slight modifications to the menu count as a huge pilot project; trying an order-at-table concept is orders of magnitude greater in potential change. Several large chains told me they'd be very interested if they saw the concept working somewhere else first.

So, the obvious conclusion is to try it out at a smaller restaurant chain. The problem there is that a pilot is considered a huge distraction that drains management's resources. It's also a huge risk when you are piloting in one unit of a ten-unit chain. And the small chain has no pull with its POS vendor, so you can't do an integration.


The first problem is that existing restaurants have POS systems that are (1) expensive, and (2) essential for operations. If you want to build an order-at-the-table system that reduces workload, you have to integrate with the POS system or replace it. Replacement is hard because these systems are actually somewhat complicated, and restaurants are scared of replacing their proven system with your unproven one.

Integration isn't hard from a technology standpoint, but you have to get the cooperation of the POS vendor. 80%+ of casual dining restaurants use either MICROS or Aloha (now owned by Radiant Systems). Neither has an open API, and neither is friendly to potential competitors. Radiant Systems has a kiosk-based front end used by gas stations for sandwich service (Sheetz, if you're in the Midwest).

Permission is hard to obtain unless you have a large restaurant chain putting pressure on their vendor. Integration without permission is problematic because the POS system becomes unsupported (the answer to all problems will be "remove the tablet system").

So, the apparent solution is "Find a large restaurant to partner with." More about why that's hard in a minute.


I spent a year of my life shopping a business plan for the tablets-at-the-table concept. I made a prototype system and tested it extensively at a restaurant. Guests loved it. Men came up with all kinds of efficiency-based reasons why they liked it (more accurate, faster, etc.). Literally 75% of the women who tried it (30 out of 40 or so) used the word "fun" and felt no need to justify further. And customers still perceive that a tip is warranted because food was brought to the table, so they tip anyway.

Restaurant owners/managers like the concept, as long as their customers do. They recognize that a single bad customer experience poisons the well: not only is that customer unlikely to come back, but they'll tell all their friends. A spectacularly good experience does not have an equally positive effect. So, given the option of customer experiences spread across a spectrum or consistently average experiences, they'll choose average and be pleased. The perception among owners was that few people could

Management complains that they can't get their servers to upsell/cross-sell/etc. One would think that the improvement in check average and thus tips would be sufficient motivation. However, people with good selling skills can make a lot more money selling other stuff. So, owners loved a system that could ask, "Would you like fries with that?"

I have to do some work now, but I'll post later today about why this went nowhere for me. There are some fundamental problems that are hard to circumvent.


And the memory footprint thing....

I understand the concern for mobile/embedded stuff and don't know much about how well Java does there.

However, when half a gig of RAM for a server costs well under $100, memory footprint just doesn't matter much for regular ol' server apps. Even a "bloated" J2EE app doesn't usually require more than a gig of heap (unless you're doing massive in-memory caching). Yeah, it's nice to use less memory, but there are 50 other things I'd give equal weight to (the availability of libraries being about 25 of those 50).

As I write this comment, I am tuning a Java rules (rete) app that is using an 11 gig heap :)
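For context on heap numbers like that, here's a minimal sketch (the class name is my own) of reading the JVM's heap limits at runtime; maxMemory() reflects whatever -Xmx was passed on the command line:

```java
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        // maxMemory() is the ceiling set by -Xmx (e.g. java -Xmx11g HeapInfo)
        System.out.println("max heap:   " + rt.maxMemory() / mb + " MB");
        // totalMemory() is what the JVM has currently reserved from the OS
        System.out.println("total heap: " + rt.totalMemory() / mb + " MB");
        // freeMemory() is the unused portion of totalMemory()
        System.out.println("free:       " + rt.freeMemory() / mb + " MB");
    }
}
```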


I have a similar technique for phone screens. Candidates are told to do the interview where they can receive email. To avoid cheating, we send them a couple of code snippets right as we call them. Then, we talk about the code.

We found that phone screeners have a bias toward candidates who speak well, so we often brought in terrible programmers for a half day of interviews. What a waste, especially when flying people in. The in-person failures we experienced were of the FizzBuzz variety, not the "I don't exactly understand concurrency" sort.

So, we started asking people to read some code and comment on it. Really basic Java stuff: Will this code compile? Trace the flow of execution through this method if the BlahException was thrown. String comparisons with ==. The idea was to see if (1) the person was at all clueful, and (2) if they had actually written Java code.

As the linked blogger noted, the disparity in responses is shocking. We found that about half the people we interviewed thought that execution of a Java method ceased once a catch block had been entered. Half! How could they ever have written robust production code? These were often people with years of Java experience on their resumes. I just don't understand how someone could write a bunch of Java code and not understand the basics of how exceptions work.
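To make that misconception concrete, here's a minimal sketch (names are my own, not from the actual screen) showing that execution resumes after the try/catch/finally block rather than ceasing once a catch block is entered:

```java
public class CatchFlow {
    static String trace() {
        StringBuilder sb = new StringBuilder();
        try {
            sb.append("try;");
            throw new IllegalStateException("boom"); // control jumps to catch
        } catch (IllegalStateException e) {
            sb.append("catch;");                     // exception handled here
        } finally {
            sb.append("finally;");                   // always runs
        }
        sb.append("after");  // execution continues here; the method does NOT stop
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(trace()); // try;catch;finally;after
    }
}
```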

Some other parts of the code review were more nuanced. Most candidates knew that one should not compare strings using double equals, but they usually couldn't explain why. Nor did they understand why == sometimes seems to work when comparing a referenced String object to a literal.
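A minimal illustration of that second point: string literals with the same contents are interned and share one object, which is why == sometimes appears to work even though it only compares references:

```java
public class StringCompare {
    public static void main(String[] args) {
        String a = "hello";
        String b = "hello";              // same interned object as a
        String c = new String("hello");  // a distinct object on the heap

        System.out.println(a == b);      // true: both literals share one interned reference
        System.out.println(a == c);      // false: == compares references, not contents
        System.out.println(a.equals(c)); // true: equals() compares contents
    }
}
```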

We were happy to hire smart candidates that hadn't done a lot of Java work, but we wanted to avoid people who had supposedly spent years with a language and didn't even comprehend the basics.


I work in a Java shop but do some statistical work using R. For me, Java is the easiest way to gather data for an analysis, and R is the clear winner over Java for statistical modeling. Gluing them together is possible, and some people have written bindings. However, these are not well documented, and they are not mature enough to use in a production environment. The difference when mixing JVM languages is that they have mature bindings (often simply because they are all written in Java).

I'm all for using the right tool for the job, but it's really hard to take on the risk of an integration one might have to maintain or expand upon. Using two languages isn't itself a problem, either for developers or managers.

