As a web user, I'm not grateful. I don't think having more pixel-perfect control or slightly faster JavaScript makes the web any better. All websites look slightly better, but the client side dies a little bit every time. In the web circa '96, anyone could write a web spider or web browser, and everyone did. There was tremendous innovation both client-side and server-side. Client-side innovations included things like search engines, Google among them. With the hyper-AJAXed world, client-side innovation becomes impossible. I don't think this is a net win for the world.
I echo the comments others have made about the wide range of improvements offered by modern browsers. But even putting that aside, the rendering capabilities you refer to make a bigger difference to web users than you might think.
You say "I don't think having more pixel-perfect control or slightly faster JavaScript makes the web any better." But--no disrespect--you probably only feel that way because you're already benefitting from the huge amounts of effort invested in supporting multiple, incompatible browsers. The web looks OK for you now because, without you noticing it, web developers have slaved to make it look OK for your unique combination of OS and browser.
It may seem like that's just a cost we web developers have to bear, with little effect on you. But that's not so. The fact that we have to spend time supporting old browsers increases the cost of everything that's created on the web. And when innovation is more expensive, it happens more slowly. The costs imposed on our industry by older browsers do affect ordinary web users, because those costs translate into a slower pace of innovation.
> I don't think having more pixel-perfect control or slightly faster JavaScript makes the web any better.
If that's all it was, then you'd be right. You can't do geolocation, local storage, audio/video, canvas/svg/webgl, etc. etc. on old browsers. It's not about making things pretty. It's about creating software that can compete with desktop alternatives.
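To make that concrete, here's a rough sketch of feature-detecting a few of those APIs in the browser (standard DOM calls; the drawing and storage bits are just placeholders):

    // Geolocation: impossible in old browsers without plugins.
    if (navigator.geolocation) {
      navigator.geolocation.getCurrentPosition(function (pos) {
        console.log('lat/lon:', pos.coords.latitude, pos.coords.longitude);
      });
    }

    // Local storage: state that survives a reload, no server round-trip.
    if (window.localStorage) {
      localStorage.setItem('lastVisit', new Date().toString());
    }

    // Canvas: client-side drawing without Flash.
    var canvas = document.createElement('canvas');
    if (canvas.getContext) {
      canvas.getContext('2d').fillRect(0, 0, 50, 50);
    }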
Unspoken assumption: that competing with desktop alternatives is good or desired.
Personally, I can't think of a single desktop application that I use that is better done in a browser with the possible exception of Google Maps, and I say "possible" because I haven't seen a desktop contender.
I use GMail's web interface solely because I can't stand any of the Mac clients (at work) and Outlook isn't available on Linux (which a couple of my PCs at home run). It is cross-platform, which is a plus; it is painfully ugly and slower than a desktop alternative, which is an overwhelming minus.
I greatly prefer my web browser to be for browsing the web. While I'm all for standards in HTML/CSS/JS, I find the "web browser as your OS!" crap to be disheartening. I like things that work, and for the most part, web applications don't.
I share your views. Making a web app is easier for the developer: just one platform to support, no restrictions on implementation, etc. But a desktop app is nicer for the users (the ones we're supposed to care about, right?). In the very best case a web app would be able to function as well as a desktop app. The best case is usually not achieved.
Imagine if a person at your job said they had just made a new client app for your business users' workflow. When you ask how it's implemented, the person says that the GUI itself is just a skin that reads everything from the database. The GUI layout, how the buttons behave, all of it is stored in the database, and the actual GUI is nothing more than a kind of platform for what's in the database. I've actually seen this done, and the team who did it were sacked, their application deprecated. As far as I know it's still running because the team in charge of replacing it still doesn't completely understand it. But this is what web apps are. Model, view, presenter, they're all stored in the same place.
Personally, I prefer having a back end RESTful server with native... shall we say "fit" clients (not fat but not thin either) using it. The browser gets a simplified, default version of the app which has a link to the appropriate native client somewhere visible.
Time = $$$. Universal constant. While your desktop app may be shinier, it takes much more time to create and maintain. This means fewer apps are published and fewer features are added.
Fewer choices and fewer features translate to a negative for the user.
On top of that, native desktop apps are compiled ... the web is open, with HTML/CSS/JavaScript/etc., and this creates an open environment that encourages free software.
Free is always good for the user.
Anyway, re-arguing this sort of thing is pointless. The desktop is in a death spiral. All this stuff has already been set in stone. It's a question of when, not if.
15 years ago, I could reasonably write a search engine. Myself. 1 person. In a few weeks (modulo bandwidth and server farm). I write a program that grabs a web page, and reads out keywords. Today, if I grab a web page, quite often, that web page has nothing except for JavaScript code. That code grabs the actual content from the server, lays it out, and animates it. To write a web search engine, I need to write a complete JavaScript runtime.
At the time, we were talking about developing all sorts of agents. Things that would shop for you. Things that would find parts for you. Things that would remember what web sites you visited, and let you search them. Things that would track where in a long set of pages you were (blog, comic, etc.), and let you keep reading from there. It happened for a while, and then it died when the web became too damn hard. Writing anything that can reasonably see and parse web pages now takes many, many man-years. There are only four or five organizations with that kind of resources (WebKit, Mozilla, Opera, IE, and internally, Google). There are countless things we just didn't even imagine.
It's like the DMCA. You notice all the innovations that happen, but you miss all the innovations it made impossible.
>15 years ago, I could reasonably write a search engine.
No, 15 years ago you could reasonably write a search engine for 15 years ago. It would suck by today's standards.
You want to handle Javascript? Easy! There are plenty of tools to choose from now. Run a browser as your crawler, visit the sites, and read the generated source instead of the static source. Shove that into your 15-years-ago search engine, and there's no difference.
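For instance, with one of the headless WebKit tools (this sketch assumes PhantomJS; the URL is a placeholder):

    // Load the page, let its scripts run, then read the *generated* source.
    var page = require('webpage').create();
    page.open('http://example.com/', function (status) {
      if (status === 'success') {
        // page.content is the DOM serialized after JavaScript has executed.
        console.log(page.content);
      }
      phantom.exit();
    });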
>Things that would track where in a long set of pages you were
You mean bookmarks? Add a scroll %, assuming they're not nice enough to use anchor tags / IDs meaningfully, and you're golden.
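A rough sketch of that idea, assuming localStorage is available (the key scheme is arbitrary and there's no error handling):

    // Remember how far down each page you got.
    window.addEventListener('scroll', function () {
      var max = document.documentElement.scrollHeight - window.innerHeight;
      if (max > 0) {
        localStorage.setItem('readpos:' + location.href, window.pageYOffset / max);
      }
    });

    // On a later visit, jump back to the saved spot.
    var saved = localStorage.getItem('readpos:' + location.href);
    if (saved !== null) {
      var max = document.documentElement.scrollHeight - window.innerHeight;
      window.scrollTo(0, saved * max);
    }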
>Writing anything that can reasonably see and parse web pages...
has become a community effort, instead of a bunch of isolated silos where people reinvented the wheel out of necessity.
The resources required aren't so large just because the web is so much more complex; they're large because it's so much faster, and you won't survive if you can't compete. How long did we languish with crappy JavaScript engines? How much would you need to know to actively compete in that section alone now? It's easy to make a slow-but-functional browser, and if you looked around you'd see some people doing just that. Making a fast-and-resilient one is as hard as making a fast-and-resilient anything, especially where human input (i.e., HTML) is expected to be consumed.
> You mean bookmarks? Add a scroll %, assuming they're not nice enough to use anchor tags / IDs meaningfully, and you're golden.
Bookmarks in books work okay. You move them. Bookmarks in browsers don't. You have to remove the old one, add the new one, and the overall process is too cumbersome to be useful for the application I mentioned.
We actually built a site to solve that problem. If you have a series of pages (blog, comic, book, etc) and want to mark your place in them with a bookmark that moves as you read, try Serialist (https://serialist.net/).
As to the auto-updating bookmarks, would it resolve the issue if I made an extension to do that for you? I can see the use, honestly, and I like it. (seriously, I'm offering, and I'd probably use it myself. It'd be an interesting project. Even if it doesn't resolve the issue - we might just fundamentally disagree here, I'm OK with that.)
But why should that be part of the browser, when modern browsers allow you to do damn near anything by simply leveraging it? Why should we rely on browser makers to tell us what's possible, when we can do it ourselves, because of the changes in the past 15 years?
I'd love to see that extension. If you write it, I will use it. I use Chrome too, so it should work here.
As to what should and shouldn't be part of the browser -- the way to figure that out is experimentation and competition. When you make technologies and standards simple and easy, people will make independent implementations and try things. The vast majority will be dumb, but some (often unanticipated ones) will turn out to be useful, clever, or brilliant. That's how the technology improves.
When you make standards big and cumbersome, progress stops.
If you want to move a bookmark to a different place on a blog / content site, it is probably because you want to read new entries. RSS does this fairly well.
If you want to read through a site's archives, what I do is keep it open in a tab. It is restored when I reopen my browser, saved if I reboot, etc. It's not as handy as a bookmark, but it comes close.
With all the headless Webkit tools coming out nowadays (and all the free and fast JS engines like V8), writing a spider that runs a JS engine and clicks on all kinds of non-<a> elements is not beyond the reach of somebody innovative and motivated enough to create new kinds of spidering robots.
You won't need to write a complete JavaScript runtime. Look at all the testing suites that automate browser instances, Selenium being the most well-known.
15 years ago the thing we call a "web application" hardly existed. If a web page "has nothing except JavaScript" (e.g. GMail), it is probably a web app, and indexing it makes little sense anyway. If someone misuses JS on a content site, that's another story.
And your comment about innovation makes no sense at all. The capabilities of modern browsers (Canvas, geolocation, local storage, offline apps, etc.) offer more opportunities for innovation than the "old web" could even imagine.
I think you (and most people here) underestimate what the "old web" could imagine, though. We had all sorts of ideas for agents that would go out and grab and analyze data for us in all sorts of clever and interesting ways. Search engines got built, as did one or two other things, and then the web just got too complex.
Hell, even I had a simple app that went out and grabbed all my favorite comics and showed them to me, nicely formatted, and without ads.
You mean ad-filtered RSS/Atom? I assume such a program would be much faster to write these days: take a set of newsfeeds, map() them through a filter function, and merge the results.
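Something like this, say (a sketch only: the feed URLs are placeholders, the ad filter is naive, and a real version would need a proxy or CORS headers to dodge the same-origin policy):

    var feeds = ['http://example.com/a.rss', 'http://example.org/b.rss'];

    function fetchTitles(url) {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', url, false); // synchronous, for brevity only
      xhr.send();
      var doc = new DOMParser().parseFromString(xhr.responseText, 'text/xml');
      return Array.prototype.slice.call(doc.getElementsByTagName('item'))
        .map(function (item) {
          return item.getElementsByTagName('title')[0].textContent;
        });
    }

    // map() the feeds, filter out ad-looking items, merge the results.
    var merged = feeds.map(fetchTitles).reduce(function (a, b) {
      return a.concat(b);
    }, []).filter(function (title) {
      return !/sponsor|advert/i.test(title);
    });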
While the web gets more complex, the tools at hand get better. Much better.
Gmail's HTML view works fine in Links. That team has been showing competence and diligence that's increasingly rare, and I wish people wouldn't tar them with the same brush as the clowns who write js-only crap.
> At the time, we were talking about developing all sorts of agents. Things that would shop for you. Things that would find parts for you. Things that would remember what web sites you visited, and let you search them. Things that would track where in a long set of pages you were (blog, comic, etc.), and let you keep reading from there.
The drive toward semantic markup in HTML5 is supposed to help the web get back to those original ideals. Over time, we'll increasingly expect web developers to conform to a subset of possible HTML arrangements, much like book publishers conform to a subset of the possible random arrangements and orientations of letters on a page (odd poetry excepted).
Most people would gladly make it harder for a single person to write a search engine if, in return, it makes it easier for them to make good web pages and web apps.
Your premise that the web is somehow less effective because you can't scrape data from pages easily doesn't make much sense to me.
Have you taken a look recently at the plethora of web APIs for just about every purpose? The modern way of collecting machine-friendly data from a server is through APIs and semantic content (RDFa, microformats, etc.).
Not through HTML / CSS / Javascript formatted pages which are made primarily for human consumption.
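As a sketch of the difference (the endpoint and field names here are hypothetical):

    // Structured data straight from a JSON API -- no HTML parsing at all.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'http://api.example.com/products?q=widgets');
    xhr.onload = function () {
      var data = JSON.parse(xhr.responseText);
      data.results.forEach(function (p) {
        console.log(p.name, p.price); // named fields, not scraped markup
      });
    };
    xhr.send();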
I would blame poor/lazy devs inappropriately using JS rather than the evolution of the browser for this. For the average web page, it's unnecessary 90% of the time to require JavaScript for any core functionality (not so much with web applications). I have a hard time understanding why people do this, as it's often much easier to test and develop when you're layering on JS unobtrusively.
Agreed that it's nearly impossible to generally parse web pages now, though if you're screen scraping it's still pretty easy (if not easier than before) to pull out data. Before you had to parse the DOM; now you can often get structured data via JSON APIs. It's more brittle, though.
I think he's saying that it makes scraping harder.
But today JS frameworks like jQuery give us the means to do anything we want javascript-related, in any browser that half-supports javascript. By deprecating IE7 they're just saying they're going to drop all of the extra hacks they had to use to keep IE7 working.
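To make that concrete, here's the kind of per-browser branching jQuery buries for you (the old-IE fallback is the classic one; the URL and element id below are placeholders):

    // What cross-browser AJAX looked like by hand:
    function makeXHR() {
      if (window.XMLHttpRequest) return new XMLHttpRequest();
      return new ActiveXObject('Microsoft.XMLHTTP'); // old IE fallback
    }

    // What it looks like once jQuery absorbs the hacks:
    $.get('/data.json', function (data) {
      $('#result').text(data.message);
    });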
A lot of what newer browsers give us is just better rendering. You can replace a mess of tables and nested divs with things like border-radius, which means less client-side html to wade through.
Are you kidding? The semantic web and the use of more metadata are making it easier than ever. Nowadays, in many cases, you not only have the content, it comes tagged with microformats or RDFa.
Try looking at Freebase or DBpedia and tell me where you could find such a huge amount of easily parsable, semantic content in the 90s.
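For instance, hRecipe marks recipes up with well-known class names, so pulling them out is a couple of DOM calls (a minimal sketch):

    // 'hrecipe' and 'fn' are the standard hRecipe class names.
    var names = document.querySelectorAll('.hrecipe .fn');
    for (var i = 0; i < names.length; i++) {
      console.log(names[i].textContent); // the recipe's name
    }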
More and more content is being taken entirely off the open web and siloed behind a server that talks an unstable proprietary protocol, with exactly one blob of javascript in existence that knows how to tunnel requests over HTTP to access shreds of that content and cram them into an utterly non-semantic DOM. We are hurtling backwards into the client-server hell the web had saved us from.
Yeah, I don't see that. I see more and more accessible APIs[1] and pages having more and more incentive to be semantic, now that search engines read that data (hRecipe, for example).
Service architectures have also been moving from stuff like SOAP to REST, which is definitely more open and accessible.
And even Ajax-laden webpages are still just a Firebug Network tab away, since they all run over HTTP, and then you have a nicely structured data format instead of having to deal with messy HTML pages.
A JSON (or SOAP) backend is only usable by third parties if its API is kept stable. There are far too many devs who redesign their backend request and response formats at the drop of a hat because they think their js client is the only one that matters (a self-fulfilling prophecy) and they can replace it simultaneously. And their responses tend to look like "here's some more markup to stuff into an arbitrary location in the DOM we're using today", not semantically structured (e.g., Rails now has this built into JavaScriptGenerator). A given site can be reverse-engineered, but anything built on that is going to be fragile and short-lived, much more so than when the typical visual rendering desired for a page determined its structure.
I don't see how that's worse than the unstable, non-semantically-structured HTML markup of yore. In the worst cases we're no worse off, and we have much more semantic content nowadays.
With respect, I think this is the same innovation that occurs in every industry and it's silly to bemoan the increasing complexity of the web.
In transportation:
It used to be that everyone could buy a horse and build a buggy and get around. Then cars came along and it got a lot more complicated and expensive to build a vehicle that was state-of-the-art, but tinkerers could still do it.
Now there are only a few big players who are capable of innovating and building the best and newest vehicles.
Now that I think about it though, there is still space for tinkerers and inventors in the automobile space. But you can't expect those automobiles to compete with those made by, e.g., Toyota.
In the same way, it's still possible to write a spider without a javascript renderer. It just won't be able to compete with Google.
One last point: the state of the web is based on the collective decisions of all internet users. Ultimately, people building things on the web decided more often than not that ajax-ifying things benefited their users.
If users had wanted a web client that would spider the web and shop for them, they would have latched onto it during the time of great innovation that you think is now gone. But they didn't. The things that users wanted are the things we see today, assuming that there isn't some horrible inefficiency in the feedback loop between web-builders and their users.
> I don't think having more pixel-perfect control or slightly faster JavaScript makes the web any better.
I think the point of deprecating IE7 as legacy is more about its hideous bugs in parsing, internal document representation, and rendering.
Those are the things that make proper code impossible to display correctly without the tedious work of understanding the dysfunctions and circumventing them. To me, the old IE rendering engines alone have slowed web innovation by at least several years.
The Google spider grabs pages from other servers. That's a web client. It's a web client that was easy to write when Google was started, but is almost impossible to write today. If search engines hadn't been invented 20 years ago, they'd be impossible to invent today. The only reason they still work is tremendous work on Google's end to make its spider able to crawl complex AJAXy pages, and the fact that content creators engage in SEO and develop with Google in mind.
It is not harder or easier. It is just different than it used to be. Things that used to be hard are easy now. Problems that didn't exist 10 years ago exist today. I develop a spider.
For the most part, the bulk of the web's content is as easily accessible as it was years ago. You make a request and you get a blob of HTML back. If you have special requirements and need to get into all the nooks and crannies you create a DOM implementation and embed a JavaScript engine. Then you parse the page into a DOM and start firing off events. There are quality open source JavaScript engines available. JavaScript and AJAX are a breeze.
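A rough sketch of that approach using jsdom under Node.js (the inline page here is a stand-in for whatever your spider fetched):

    // jsdom pairs a DOM implementation with an embedded JavaScript engine.
    const { JSDOM } = require('jsdom');

    const html = '<body><div id="out"></div>' +
      '<script>document.getElementById("out").textContent = "rendered";</script>' +
      '</body>';

    // runScripts lets the page's own JavaScript execute against the DOM.
    const dom = new JSDOM(html, { runScripts: 'dangerously' });
    console.log(dom.window.document.getElementById('out').textContent); // "rendered"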
Flash is a different story. If you have any requirement to follow links or process content in a Flash movie (you'd be surprised how many sites still have Flash nav) you pretty much have to write your own runtime. Unless you are big enough to have Adobe do it for you.
Depending on what you are doing with the data that your spider collects, chances are writing a spider is far easier than writing a browser. There are at least 4 widely used browser engines and plenty more toy browsers floating around.
I can guarantee that writing a spider that can deal with AJAX is not the biggest challenge of developing a search engine. Scaling it, fighting spam, understanding the content, indexing, and then being able to provide quick lookups are much, much harder.
I recently looked through the Google developer guidelines and they still recommend not changing the page contents significantly using Javascript. Also, the #! in modern AJAX apps is there to avoid having to run the Javascript in order to crawl the content. Do Google actually do that much with the Javascript on a page even today?
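For reference, the #! scheme maps each AJAX URL to a static "_escaped_fragment_" URL the server can answer without running any JavaScript. A rough sketch of the rewrite a crawler would do:

    // http://example.com/app#!profile/42
    //   becomes
    // http://example.com/app?_escaped_fragment_=profile%2F42
    function escapedFragmentURL(url) {
      var i = url.indexOf('#!');
      if (i === -1) return url; // nothing to rewrite
      var sep = url.indexOf('?') === -1 ? '?' : '&';
      return url.slice(0, i) + sep + '_escaped_fragment_=' +
        encodeURIComponent(url.slice(i + 2));
    }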
I don't quite get what point you are making here. Are you saying that the innovation today owes more to the innovation of the mid-nineties? Are you saying that any innovation now with respect to AJAX and JS does not make your life better in any way?
Seemingly your argument could be made for cars: "15 years ago cars were easy to fix and understand. Now they are not, so wake me up when they are like the cars of the mid-nineties."
JS/Ajax help programmers tremendously. Helps speed. Helps functionality.
To view these things through the lens of "I can't write a crawler for them" is a pretty limited view of what today's technology offers.
I'm glad. Thank you for taking the time to read and understand. Hacker News is starting down the decline that hit reddit 2 years ago, where people don't bother to try to understand different viewpoints and just downvote anything they don't agree with. It's nice to see good people still on here...
> With the hyper-AJAXed world, client-side innovation becomes impossible.
Are you serious? Have you heard of the canvas element? It allows modern browsers to do things that, 2 years ago, were only possible in Flash. Have you noticed how there are actual web applications these days, not just collections of linked pages? Have you noticed how, with ubiquitous JavaScript, the usability and ease of websites has improved greatly?
No. I haven't. I've noticed maybe 1 or 2 web apps I want to use (Google Docs and Google Maps). Beyond that, I don't see anything that couldn't be delivered more effectively without JavaScript that I want or need.
Usability is not up. Each web site has its own, custom, non-standard user interface. I could teach my mom to use the web circa '96. I cannot teach her to use it today. It's too damn complex.
Usability would be up if the browser knew more about what to expect. You can look at things like Readability. The browser ought to know more about the content, and be able to present it in a coherent, usable way. The server-side shouldn't dictate presentation.
A big part of usability is not sitting around waiting for entire pages to reload every time you interact with them. AJAX has done great things for users by minimizing this delay. You wouldn't like Google Maps as much if you had to click an arrow and wait for a page refresh for the map to move, like MapQuest circa 2003.
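The pattern, in a minimal sketch (endpoint and element id are placeholders): fetch just the missing piece and patch it into the page, instead of reloading the whole document.

    // Replace one stale region in place -- no full page reload.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/tiles?x=3&y=7'); // placeholder endpoint
    xhr.onload = function () {
      document.getElementById('map-viewport').innerHTML = xhr.responseText;
    };
    xhr.send();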
Yes, the proliferation of web apps has created a diversity of user interface paradigms. Some would say this is a good thing, however, since the web has spurred all kinds of new UI philosophies, and the fact that JavaScript and HTML aren't compiled allows people to examine and re-work others' code, so good ideas spread very quickly. I for one don't intend on waiting for the HTML5 group to invent every new <input type=""> that I could conceivably need, and then wait some more for browser vendors to implement them all consistently. With JavaScript, you can currently build and deploy just about any kind of 2D client-side interaction imaginable.
In short, the vast majority of users on the internet probably have a different idea of usability than yours, and the numbers tell the rest of that story. You only need to look at the gross casserole of UI paradigms within the applications installed on your mom's PC to see how much users really care about UI standardization.
Did you read what I wrote? I mentioned Google Maps as one of the two places I found AJAX useful.
The applications on my mom's PC do have much better UI standardization than the web does. Microsoft releases UI guidelines. alt-f4 does the same thing in every application I've used, and the menu structure is roughly the same too. Apple is even better.
> Did you read what I wrote? I mentioned Google Maps as one of the two places I found AJAX useful.
Yes, and I was dissecting why you may have found it useful, because the same principle applies to hundreds of other situations that you may not have recognized.
> The applications on my mom's PC do have much better UI standardization than the web does. Microsoft releases UI guidelines. alt-f4 does the same thing in every application I've used
Questionable. About the only key shortcuts you can rely on are the ones that will work in your browser too. Alt-F4 will close your browser--that's what you wanted, right? Cut/copy/paste, print, etc. all work there as well...
> the menu structure is roughly the same too
Ha, you mean the invisible menus on Explorer and IE>8, the mega "office button" menu in Office 2007, the delightfully inconsistent menu bars in WMP>9...
Microsoft and UI guidelines in the same sentence, something doesn't compile - I would be happy if they used their own guidelines though.
> the menu structure is roughly the same too
Too bad it is getting reinvented; it happened in Office 2010 and it will happen again as people get tired of File -> Save, and yet again when touch screens on laptops become the norm.
So I'm sorry for your mom, but unless she never upgrades, she's going to have to learn new things.
I do see a side of what you're saying. In those older days the web was a much simpler platform, so figuring out what to do and what to click was easy. This was true on Windows too: MFC was the library of choice for UIs, which meant a lot of the software was easy to figure out as well.
Today a lot more software and websites have broken the mold and come up with some really different UX patterns (not saying better or worse, because both exist out there). There aren't standards for web UI anymore that are practiced across the board.
I do disagree with your statement that the client, not the server, should dictate how a site is presented. The browser should display the content in a standards-compliant way. The days of buttons looking like Windows buttons in IE and Mac buttons in Safari should be a thing of the past, never to return.