I recently wrote about “Why Java is stupid”; the responses, found both here and on other aggregating blog sites, were interesting. I was intentionally a bit uncensored in that post, and it became obvious that the point was being missed.
I’m going to backtrack, and try explaining myself again, though this time with more directed arguments and reasons. I’m also going to break it up into a few parts (at least 2), so that it’s easy to understand the isolated concepts in each part.
In addition, I will completely leave out the mention of any other language (except for simple references to their existences) so that readers may avoid misunderstanding the point being made.
Part 1: Time Will Tell
Over time, it becomes clear why certain technologies were good or bad; with time, we begin to see the mistakes we made early on. When a technology first debuts, we use it because it’s new. Then, as time goes on, we begin to see where improvements could have been made.
A few examples:
- Laptops, like the Epson HX-20 mothership herself, or maybe the first Compaq portable.
- Zip disks: a victim of timing and rapidly advancing technologies (of which technologies the Zip disk was not a part).
- Cell phones: Clearly the technology has developed since the days of the “brick” phones, which, unfortunately, are only recently behind us.
So clearly good ideas were present in each of those examples, but the technologies are now radically different from when they first came out.
What changed? Sure, we made the parts smaller. Of course, we changed the casings to look more sleek.
But what REALLY happened? Here’s what changed: Laptops now have dual-core processors. You can buy 16 GB SD flash cards for under $50, and solid-state hard drives are finally pushing their way into the market. We’ve got cell phones that double as music players, triple as GPS systems, and are becoming development platforms, as with the Apple iPhone and Google’s Android-powered phones. For crap’s sake, we have OPEN SOURCE *phones*, and we read our email on them.
It’s obvious that the core functionality of those technologies has been radically redone. We didn’t just slap a new face on cell phones to get them to where they are now. The fundamentals were rethought.
However, some things simply seem to lag a bit behind in our vast Information Age; the ideas backing those technologies seem slightly outdated.
To be fair, those technologies (dusty or not) will still have their uses. The Verilog programming language’s IDE: holy heck, that’s a hard, buggy thing to use. It would benefit from a makeover. But we don’t do that, because too many things rely on the situation as it currently stands. Changing it would probably create more trouble than it’s worth, one might argue.
But here’s an example of a technology that actually holds a massive part of our programming playing field: Java. Java is far more prestigious than Verilog (thank goodness), yet it seems to be suffering from its failure to implement more modern programming ideals.
Java, by nature, is remarkably similar to C++, given some basic syntactical differences. It was designed to be C++++, if you will; conceptually it didn’t stray far from what the world knew and loved (that’d be C), yet it had the advantage of running on any platform for which a JVM was created. It was a marvelous step forward into the fairly unknown territory of abstracting programming into layers. It took away the nastiness of having to manage your own dynamic memory, with “destructors” and such.
And Java became popular.
And this comes at no complaint from me, I assure you. The problem arises, however, when 6 or 7 years go by and we realize that there are better conceptual models. To stay “fair”: the various models have their various specialized uses.
- Perl – made for rapid development of quick’n’dirty tasks. Programs of the target size wouldn’t benefit from being compiled anyway, so this scripting language made sense. Its problem is said to be that it produces “write-only” code.
- PHP – designed for amazingly flexible web development, where users could escape CGI and actually run a markup preprocessor at good speeds. It specifically avoids certain annoying cache issues. Its problem seems to be that it was *not* initially designed to be an object-oriented language.
- C# – the ‘C’-family alternative to Java. It’s conceptually the same idea as Java, though its syntax strays a bit from what the world has been used to. It boasts faster speeds than Java, running at “almost” (whatever that means) the same speed as native C code.
And the list could go on. Each has its use. Each has its flaws or weaknesses. Java is not exempt.
The concept of the JVM was miraculous. That’s the sort of thinking that progresses the information age. But as we explore new technologies, coding in Java for small, simple tasks is far from the best option. You’re much better off handing the task to a lighter-weight scripting language. There’s almost no sense in going through the burdensome process of defining separate files for just a few simple classes, then one more class (and file) for your static utility methods, and then possibly another class/file to serve as your program’s entry point. That’s silly. Just script it and get the task done.
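To make the ceremony concrete, here’s a minimal sketch of what even a trivial task costs in Java; the class and method names (`WordCount`, `countWords`) are invented for illustration, not from any real codebase:

```java
// Hypothetical example: the Java ceremony wrapped around a one-line task.
public class WordCount {

    // The actual task: count whitespace-separated words. One line of logic.
    static int countWords(String text) {
        String trimmed = text.trim();
        return trimmed.isEmpty() ? 0 : trimmed.split("\\s+").length;
    }

    // ...and the mandatory class + entry-point boilerplate around it.
    public static void main(String[] args) {
        System.out.println(countWords("just script it and get the task done"));
    }
}
```

In most scripting languages, the single line of logic would be the whole program.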
The “Just script it” method is great because it removes the needless, never-changing parts of writing in C or Java: you just worry about the task, and the scripting language worries about getting it done. This method often suffers a bit in speed, but we’re no strangers to that trade-off.
Take this example: Computers work on “synchronous” designs, where everything is controlled by one common “clock” signal. The clock keeps everything coordinated, so that if one part of the computer finishes its task before another, you avoid “race conditions”: situations where there’s no absolute guarantee as to which task finishes first. If the computer can’t predict which will be first, you’re bound to have a very, very unstable computer. The cost of a synchronous design is that we suffer a little slowdown. The opposing “asynchronous” idea, however, is mass chaos when aimed at a design as complex as a full desktop computer. If painstakingly designed, yes, it may be possible to create an asynchronously clocked personal computer, but would the effort be worth it?
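The same trade-off shows up in software, and Java threads make a handy sketch of it: a shared counter hit by two threads is a race unless some coordination point (here, a lock, playing the role of the clock) serializes the updates. The class name `RaceDemo` and the trial sizes are invented for illustration:

```java
// Sketch: unsynchronized shared state is unpredictable; a lock restores order.
public class RaceDemo {
    private int count = 0;

    // Without "synchronized", two threads can interleave the read-modify-write
    // of count++ and lose updates -- the software analogue of a race condition.
    synchronized void increment() {
        count++;
    }

    // Run two threads that each increment the shared counter 100,000 times.
    static int runTrial() throws InterruptedException {
        RaceDemo demo = new RaceDemo();
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                demo.increment();
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start();
        b.start();
        a.join();
        b.join();
        return demo.count;
    }

    public static void main(String[] args) throws InterruptedException {
        // With the lock, the result is deterministic: always 200000.
        System.out.println(runTrial());
    }
}
```

Drop the `synchronized` keyword and the printed total becomes unpredictable from run to run, which is exactly the instability the synchronous clock is paying to avoid.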
So that’s what it comes down to: Is all that extra effort worth it? Lose a little speed, gain productivity. The argument is that there are diminishing returns on a function f(x) = y, where ‘x’ is the time put in and ‘y’ is the results. Eventually you reach the point where a huge amount of effort is required just to get slightly better results.
Java is a strange case, because it’s interpreted, yet it’s verbose.
It can be fast, but the methods that actually do the dirty “fast” work are hard to read and are packaged up in shadowy utility packages in the dark corners of the language. You’ll be scolded by Java junkies about how to “do it better”, which only proves that you have to be a junkie to know about the good, fast utilities.
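One small, hedged example of that “do it better” folklore (the class name `ConcatDemo` is invented, and the example is mine, not the original post’s): building a string in a loop with `+=` allocates a brand-new `String` on every pass, while the insider answer, `StringBuilder`, appends into one growable buffer.

```java
// Illustrative sketch: the obvious path vs. the "junkie" fast path.
public class ConcatDemo {

    // The obvious way: each += allocates and copies a whole new String.
    static String slowJoin(String[] parts) {
        String result = "";
        for (String p : parts) {
            result += p;
        }
        return result;
    }

    // The way you get scolded into: StringBuilder appends in place.
    static String fastJoin(String[] parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) {
            sb.append(p);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String[] parts = {"ja", "va"};
        // Both produce the same result; only the allocation behavior differs.
        System.out.println(slowJoin(parts).equals(fastJoin(parts)));
    }
}
```

The two methods are functionally identical; the point is that nothing about the language surface tells a newcomer which one scales.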
Its big JVM is possibly a little *too* big, and that very quickly rules Java out of module-based web programming (where a server like Apache simply calls a Java process to run your Java code). So instead, we’ve developed Java-specific “solutions” like Tomcat, Glassfish, and a few other such things to accommodate the ragingly popular Java.
And here’s where we ask ourselves, in light of Java and its world-wide influence:
Is there a better way?
I submit that there is. More to follow in Part 2, yet to be written.