Updated: 1/17/2002
Java is probably the most-hyped programming language ever, and it deserves almost none of its fame. All of its promises can be fulfilled by existing languages, or moved into standardized protocol design instead of a language. There is nothing really new that Java brings to the table. Sun is much better at marketing than at creating new languages. HP and IBM products are often technically superior or better deals, but those companies have not learned to market the way Sun and Microsoft have.
Perhaps Java's only real claim to fame is being the first language designed for web applet usage; but even that niche is being filled by Flash players and others because Java didn't do applets very well. Now it is trying to remake itself as a server language. However, it is too slow and too unpredictable (GC) for systems work, and too strongly typed for application database work. Java is in for a long struggle.
Just like the sinking dot-coms, Java is going to have to stand on its own merits (if it has any) when the hype runs out. Hype can knock you down just as fast as it brings you up. Hype is a two-edged sword.
Dumb dumb Dumb dumb:
    Thingy thingy = new Thingy();

Why does the Unix crowd hold on to case-sensitive tokens? Case sensitivity should have gone out with vacuum tubes. CPU speed? Come on! Make the machine be the slave, not the human programmer. How much speed does it buy? A 0.000001 increase?
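To make the complaint concrete, here is a tiny sketch of my own (the class names are made up) showing that identifiers differing only in case are entirely separate tokens to the Java compiler:

    class Thingy { }

    class CaseDemo {
        public static void main(String[] args) {
            // "Thingy" (the type) and "thingy" (the variable) differ only in case,
            // yet the compiler treats them as unrelated identifiers.
            Thingy thingy = new Thingy();

            // A slip of the shift key is a hard compile error, not a near-match:
            // thingy oops = new thingy();   // error: cannot find symbol (no type named "thingy")

            System.out.println(thingy);
        }
    }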
print("Hello world"); Or "Hello world".print()Bloated:
System.out.println("Hello world");The Java approach is a "Law of Demeter" sin. It hard-wires the "path" of the resource into the application code.
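For contrast, here is a minimal sketch of my own (nothing beyond the standard PrintStream class is assumed) that hands the destination to the code instead of hard-wiring the System.out path into it:

    import java.io.PrintStream;

    // A sketch of the "don't hard-wire the path" idea: the caller decides
    // where output goes; the greeting code only talks to the parameter it
    // was handed. The names Greeter and greet are illustrative only.
    class Greeter {
        private final PrintStream out;

        Greeter(PrintStream out) {
            this.out = out;          // destination injected, not reached via System.out
        }

        void greet(String who) {
            out.println("Hello " + who);
        }

        public static void main(String[] args) {
            new Greeter(System.out).greet("world");   // the wiring happens once, at the edge
        }
    }

The same code can then be pointed at a file, a socket, or a test buffer without touching the greeting logic.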
"The main reason to heed the rules in the previous section is that
they make your code easier to maintain, because all the changes
that typically need to be done to fix a problem or add a feature
tend to be concentrated in one place."
Changes come in many different forms. Optimizing a grouping for one type of change often disadvantages a different kind of change. OO often emphasizes "noun-oriented" changes at the expense of "verb-oriented" changes. There is no free lunch, and in my opinion verb-orientation is just as important as noun-centricity. See Shapes Example and Aspects for more on this.
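As a rough illustration of the tension (a sketch of my own, not the Shapes Example itself): in a noun-oriented grouping the operations live inside each noun, so adding a new shape touches one place while adding a new operation touches every class.

    // Noun-oriented grouping: each "noun" (shape) carries its own "verbs".
    // Adding a new shape is one new class; adding a new operation, say
    // perimeter(), forces an edit to every existing shape class.
    interface Shape {
        double area();
        void draw();
    }

    class Circle implements Shape {
        double radius;
        Circle(double radius) { this.radius = radius; }
        public double area() { return Math.PI * radius * radius; }
        public void draw()   { System.out.println("circle, r=" + radius); }
    }

    class Square implements Shape {
        double side;
        Square(double side) { this.side = side; }
        public double area() { return side * side; }
        public void draw()   { System.out.println("square, s=" + side); }
    }

    class ShapeDemo {
        public static void main(String[] args) {
            Shape[] shapes = { new Circle(2.0), new Square(3.0) };
            for (Shape s : shapes) {
                s.draw();
                System.out.println("area = " + s.area());
            }
        }
    }

A procedural grouping has the opposite profile: a new operation is one new routine, but a new shape means editing every routine that switches on the shape kind.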
"If anything, good object-oriented systems are more complex than
procedural ones, but in such systems the program is better organized
and thus easier to maintain."
While I personally agree with the first part of this sentence, the second remains largely unproven.
"Consider a system designed to get names from users. You might be tempted to use a TextField from which you extract a String, but that just won't work in a robust application. What if the system needs to run in China? (Unicode comes nowhere near representing all the idiographs that comprise written Chinese.) What if a user wants to enter a name using a pen (or speech recognition) rather than a keyboard? What if the database you're using to store the names can't store Unicode? What if you need to change the program a year from now to add employee IDs everywhere names are entered or displayed? In a procedural system, the solutions you come up with to answer these questions usually highlight enormous maintenance problems inherent to these systems. There's just no easy way to solve even the smallest problem, and a vast effort is often required to make simple changes."
"Consider a system designed to get names from users. You might
be tempted to use a TextField from which you extract a String,
but that just won't work in a robust application. What if the
system needs to run in China? (Unicode comes nowhere near
representing all the idiographs that comprise written Chinese.)
What if a user wants to enter a name using a pen (or speech
recognition) rather than a keyboard? What if the database
you're using to store the names can't store Unicode? What
if you need to change the program a year from now to add
employee IDs everywhere names are entered or displayed?
In a procedural system, the solutions you come up with
to answer these questions usually highlight enormous
maintenance problems inherent to these systems. There's
just no easy way to solve even the smallest problem, and
a vast effort is often required to make simple changes.
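To make the quoted approach concrete, here is a minimal Swing sketch of such a Name class. The class and method names are my own illustration, not code from the article:

    import javax.swing.*;
    import java.awt.*;

    // A self-displaying, self-initializing Name, roughly as the quote describes.
    class Name {
        private String value = "";

        // "Display yourself over there": the Name decides how it is rendered.
        void displayOn(Container where) {
            where.add(new JLabel(value));
        }

        // "Initialize yourself using this piece of this window": the Name happens
        // to use a JTextField here, but that is its business, not the caller's.
        void initializeOn(Container where) {
            final JTextField field = new JTextField(20);
            field.addActionListener(new java.awt.event.ActionListener() {
                public void actionPerformed(java.awt.event.ActionEvent e) {
                    value = field.getText();
                }
            });
            where.add(field);
        }
    }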
First of all, there is no law that says procedural frameworks cannot have a "name" subroutine and/or type. I can envision a NamePrompt(x) routine that can also "hide" all the stuff listed. Hiding details is what subroutines are all about.
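A rough sketch of what such a routine might look like; only the name NamePrompt comes from the paragraph above, while the parameter and the use of JOptionPane are my own guesses:

    import javax.swing.JOptionPane;

    class NameRoutines {
        // One call hides the prompting details -- which widget appears, how it is
        // localized, where the value ends up -- behind a plain subroutine interface.
        static String NamePrompt(java.awt.Component parent) {
            return JOptionPane.showInputDialog(parent, "Name:");
        }

        public static void main(String[] args) {
            String name = NamePrompt(null);   // pops a simple input dialog
            System.out.println("Entered: " + name);
        }
    }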
However, trying to engineer a system in advance for every conceivable change, before the requirements even materialize, is often unwise. For one thing, to prepare for all of the potential options, such a component may require a large interface. As programmers come and go (or forget), they will have to slosh through this interface to figure out what is relevant and what to ignore for the moment.
"In the simple example above, you're tasked with adding
an employee ID to every name in every screen that displays
employee names. In the RAD-style architecture, you'll have
to modify every one of these screens by hand, modifying or
adding widgets to accommodate the new ID field."
What the author is proposing is being able to make this change in one place and have the new ID field automatically propagate to all screens that display the name. It is an intellectually interesting idea, until you think a little deeper.
This is frankly a bad idea! What if you don't want the ID to appear on every screen? For example, on some screens there might not be enough room without manual rearrangement of fields (automated aesthetic adjustment is not something computers currently do very well). Or perhaps some viewers should not be allowed to see the employee ID at all; external (B-to-B or B-to-C) clients, for example, may be limited to seeing just a name. The author risks exposing internal information to the outside world.
Such "blunt propagation" of changes could be done just as easily with a procedural approach (assuming a well-designed GUI protocol), but I won't risk endorsing the idea by describing how.
"...looking at a widget that I use all the time: a wrapper around a Collection capable of creating a visual proxy that automatically changes its appearance. By examining the amount of screen real estate available to it, the widget displays the Collection as a combo box, a list, or a button that pops up a frame containing the list when pressed. This sort of dynamic adaptability is essential when implementing user interfaces for object-oriented systems, since the context in which a particular attribute will be displayed is often unknown at compile time." [Emphasis added]Imagine how a user would feel if one time they come to a screen to be faced with a pull-down list, and then later a button that says "see list...." after selecting a few different options or shuffling screens around a bit. Consistency is an important trait in user interface design. Lack of consistency tends to confuse and/or distract the user. The user may pause to think, "Hmmm, why did that pull-down list change into a button? Am I in the right screen?" One should not make the screen keep morphing around unless it is the only practical option left. I would have to look at a specific case to weigh and present the tradeoffs involved. However, morphing widgets and layouts should be a last resort only.