#1176 Virtual Method design question

yachris Thu 12 Aug 2010

In the documentation, it says:

"Methods must be marked using the virtual keyword before they can be overridden by subclasses."

I'm curious what the point of this is. It exists in other languages too, and IMHO it's the "checked exception" of the 2000s.

It seems to break one of the main points of having an object-oriented language: you want objects to be polymorphic.

It also seems to break a lot of standard usage -- I can think of both the Adapter pattern and the Decorator pattern, both of which exist for situations like the following.

Someone (say, a framework writer) has created a class Foo. So you want to subclass Foo, override some behavior, and then be able to hand your subclass instance to ANY method that takes a Foo as an argument. Assume, of course, that you do not have the source for the framework.
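A minimal sketch of that scenario in Fantom, using hypothetical names; note that under Fantom's rules the subclass below compiles only because the framework author happened to mark the method virtual:

// Hypothetical framework class -- assume we have no access to its source.
class Foo
{
  // The override below is legal only because this is marked virtual.
  virtual Str greeting() { return "hello" }
}

// Our subclass overrides some behavior, then works anywhere a Foo is expected.
class LoudFoo : Foo
{
  override Str greeting() { return super.greeting.upper }
}

class Demo
{
  // Accepts any Foo, including our subclass: ordinary polymorphism.
  static Void show(Foo f) { echo(f.greeting) }

  static Void main()
  {
    show(Foo.make)      // hello
    show(LoudFoo.make)  // HELLO
  }
}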

Having this insistence on "virtual" seems to imply that the designer of the class needs to be able to predict all possible future usage of his class, or (I guess) mark every single method as virtual. Which then seems like a waste of a keyword.

The reason I mention checked exceptions is that Java had them, and an amazing number of people hated them (for a variety of reasons), to the point that, to the best of my knowledge, no language since then has had checked exceptions. They were seen as more trouble than their documentation benefits were worth.

So, I'm curious... where did this come from? I don't see anything that requires it; why is it seen as a good idea?

rfeldman Thu 12 Aug 2010

I tend to agree. I prefer Java's permissive alternative: mark a method "final" (or whatever) to prohibit overriding, and allow it by default.

tactics Thu 12 Aug 2010

Having this insistence on "virtual" seems to imply that the designer of the class needs to be able to predict all possible future usage of his class

I think the reason for the decision is this point inverted: the superclass designer is able to dictate that future usage happens in a controlled way.

Like all type checking issues, it's a trade-off of expressiveness versus safety. To override a method is to assume responsibility for that method at a higher level of abstraction.

In a virtual-by-default language, you could end up with nonsense by overriding the wrong method. Imagine a widget class where a user could override the repaint method. The repaint method is not intended to be overridden (you should override the onPaint callback instead). In fact, repaint should never be overridden, and doing so will potentially break your code.
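In Fantom terms that policy might look like this (a hypothetical widget class, not the real fwt API):

// Hypothetical widget class sketching the repaint/onPaint policy.
class PaintWidget
{
  // Non-virtual: owns the invalidate/redraw machinery, so no subclass
  // can break that contract. Overriding it would be a compile error.
  Void repaint()
  {
    // ... invalidate the dirty region, schedule a redraw ...
    onPaint
  }

  // Virtual: the one sanctioned override point for custom drawing.
  virtual Void onPaint() {}
}

class MyWidget : PaintWidget
{
  override Void onPaint() { echo("custom painting") }

  // override Void repaint() {}  // would not compile: repaint is not virtual
}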

It also helps the users of an API to have the subclassing policy specified. If you look at the docs for Thread in Java, it's not as clear how to use the class. But looking at the Fantom docs, BAM, you see the virtual keyword and you go "oh, I can override that with something useful." It draws your attention to the extension points of a superclass and gives you a laser-sharp idea of the intended use.

rfeldman Thu 12 Aug 2010

I think the reason for the decision is this point inverted: the superclass designer is able to dictate that future usage happens in a controlled way.

This is still the case as long as there's a final equivalent - the question is whether the control should be opt-out or opt-in. (Expressiveness versus safety, as you said.) I don't think anyone believes developers should be unable to specify whether they want to prohibit or allow subclasses from overriding a given method.

I actually think repaint is a great example of why opt-out makes more sense.

Clearly you do not want another coder coming along and modifying repaint - that would be dangerous. So in an opt-out world, you mark repaint as final and it can't be overridden. End of story.

But how many other methods in that class are both exposed to subclasses and dangerous to override? For most classes, the answer is "maybe one or two, usually none." If they're that dangerous, they're often private.

Yet in an opt-in (explicit-virtual) world, it is impossible to override even the non-dangerous ones unless the original developer took the time to mark every single non-dangerous method with virtual.

In my experience, final-over-virtual has been one of the nice things about using Java. Even in an API, I find myself marking only an extremely small percentage of my methods final, because for the vast majority of them, overriding will do no harm and will let future developers build more easily on what I've written.

As it stands, I can already see myself going through and adding last-minute virtual declarations to my own code on a case-by-case basis every time I realize I want to override after all. Future developers building off an API where such an omission has happened will not have that luxury.

katox Thu 12 Aug 2010

See Anders Hejlsberg's explanation. Fantom takes the C# route here.

Side note: virtual-by-default favors inheritance (to be able to use polymorphism at all) over object composition. This sounds good in theory, but in practice object composition is much less fragile. That is a logical outcome, because you make fewer assumptions about how the actual implementation works internally. And even if you get everything right, will it last?

jodastephen Thu 12 Aug 2010

I'd note that Anders' first point about performance is pretty moot: the JVM's HotSpot now handles that nicely whether a method is virtual or final.

My gut feeling is that Fantom's choice here is wrong. But unlike some other areas where I'm certain it's wrong, in this case I don't have the same certainty. My key argument is that Fantom is fundamentally relaxed about its type system, yet this rule is much harsher than Java's. Thus, it feels like it doesn't fit in.

cbeust Thu 12 Aug 2010

In a virtual-by-default language, you could end up with nonsense by overriding the wrong method.

We actually have a huge amount of data to evaluate the validity of this scenario: Java.

As everybody knows, Java allows overriding by default. I have been programming in Java for almost fifteen years and I can't even remember when I accidentally overrode a method and this action led to "nonsense".

Anyone else?

It just doesn't happen in practice. And as the original poster pointed out, the opposite holds true: many, many times, I have found it very convenient to be able to override methods from base classes even though the author probably didn't consider that somebody might want to do that one day.

Going back even further, I remember having the same concerns with C++ and I can even remember a few times where I ended up doing "#define private public" in order to get my work done.

I would vote a strong +1 in favor of removing the requirement for "virtual".

I also think that when used correctly, checked exceptions are a very powerful tool that increases software robustness, but let's save this for another time.

cbeust Thu 12 Aug 2010

By the way, I found Anders' arguments for virtual pretty weak back then (the interview dates from 2003) and now, I think that they are simply contradicted by facts. In all these years, the two problems he mentions (performance and versioning) have simply never happened to me, and I would claim, not to the wide Java world either, or we would have heard about it.

People complain about a lot of things about Java but I don't think I have ever heard anyone unhappy with the "default is virtual" approach.

katox Thu 12 Aug 2010

I can't even remember when I accidentally overrode a method and this action led to "nonsense". Anyone else?

@cbeust bad things happen

I'm just curious: during those 15 years, have you never encountered a situation where someone intentionally (not accidentally) overrode a method on a framework object and things broke horribly on the next minor framework update, just because of that?

In all these years, the two problems he mentions (performance and versioning) have simply never happened to me

No Maven issues? No JSR 277 interest?

Anyway my vote is +-0 - I don't care about this too much. I'd rather see a delegation simplification proposal or something like that.

brian Thu 12 Aug 2010

Methods are non-virtual by default because I believe class designers must explicitly design for overrides. I love OO as a practical mechanism for packaging data and functions together. However, when it comes to subclassing and polymorphism, I don't necessarily think that is always the best solution. When polymorphism is desired, I think it needs to be explicitly designed.

Anyone who has designed a library faces these decisions. In my experience a well designed class should have lots of public methods for access, but only a few well defined override points.

Over the long term, I think it's worth giving up a bit of flexibility to be able to version classes robustly. Overriding methods which weren't intended as override points creates fragile code. Unless you are calling super, how do you know that you did everything you needed to do? Suppose other internal code was using that public method, and you subtly broke some behavior?
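A sketch of that failure mode, with hypothetical names, imagining a world where deposit were overridable by default:

// Hypothetical class; imagine deposit could be overridden.
class Account
{
  Int balance := 0

  Void deposit(Int amount)
  {
    balance += amount
    audit(amount)  // bookkeeping that other code depends on
  }

  // Internal code reuses the public method...
  Void depositAll(Int[] amounts) { amounts.each |Int amount| { deposit(amount) } }

  private Void audit(Int amount) {}  // record for compliance
}

// A subclass that overrode deposit without calling super would silently
// skip auditing for every depositAll call too -- subtle breakage the
// subclass author never sees, which non-virtual-by-default prevents.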

If you do forget to mark a method as virtual, adding it later is a non-breaking, trivial change. On the other hand, the reverse (adding the final keyword) is a breaking change.

cbeust Thu 12 Aug 2010

Brian,

I think you are glossing over the real-world scenario: when you need to override a method that's not virtual, and you can't make it virtual because it doesn't belong to you.

I suspect you're not sensitive to the problem because 1) you are writing Fan, Fan libraries and tools, and 2) you own the entire stack so you can change whatever you want in it.

Now, let me ask you this: how often have you had to go add a virtual keyword on a base class while working on Fan?

If it's happened more than a few times, that should cause you to pause and realize that the flexibility of overriding something the author didn't envision far outweighs the hypothetical (and very rare, as I claimed above) case of breaking something with an unfortunate override.

cbeust Thu 12 Aug 2010

By the way, I find C#'s approach to this problem really horrible.

You have three different keywords, virtual, override and new, that you can mix and match to achieve all kinds of weird scenarios, such as hiding methods. Check this out for an illustration.

At least, Fantom will simply refuse to compile your code if you attempt to override a method that was not declared virtual.
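For example (hypothetical classes, compiler message paraphrased):

class Base
{
  Void run() {}  // non-virtual by default
}

class Sub : Base
{
  override Void run() {}  // compile error: Base.run is not virtual
}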

I still stand by my claim that methods should be overridable by default, though :-)

DanielFath Fri 13 Aug 2010

Not to mention it kinda conflicts with (or maybe is caused by) the public-by-default attitude.

andy Fri 13 Aug 2010

This has been an interesting thread. I'm surprised people really liked Java's model so much - I thought that was one of the worst design decisions they made - and was very glad to be rid of it.

As an API designer and consumer, I find it about 100x easier to write and consume libraries when things are marked virtual. I don't have to read docs to understand the intent of APIs - the code speaks for itself. And you never have to worry about someone breaking things by modifying behavior unintentionally. Overall, just a much cleaner and safer design.

rfeldman Fri 13 Aug 2010

It's cleaner and safer if the APIs you are consuming work as advertised.

More than once I have used a 3rd-party API that turned out to be broken or inadequate for the specific use I needed it for. In those cases, the fact that I was able to override certain unremarkable methods (certainly not the type you'd bother to mark virtual) was invaluable.

tcolar Fri 13 Aug 2010

I see both sides of this, as I've used many huge libraries and written some.

As a library designer you might think:

  • I'm gonna make a "perfect" design and people will use it the way it's intended.
  • I don't like users overriding random things, because after I change some "non-virtual" code, they'll upgrade and complain that their code broke.
  • Users will misuse my code and support will be a pain.

Now as a library user, I disagree because:

  • You might think your "design" is perfect ... but maybe I want to "misuse" your library. You might call it misuse, or you might call it creative use the designer didn't think about (very common) ... for example, I buy PVC pipe to build a greenhouse ... not the intended use, but it works well.
  • Maybe there is a bug in the library ... now if I can replace any method (all virtual), this has huge advantages: 1) I can find the bug and try a fix. 2) I can use my fix until you fix the library (which might take a while).

Personally I work with gigantic libraries/APIs when using the SAP Java stack ... they love making their stuff private & final, and many times when I could have fixed something with a one-liner override, I instead get to wait a month for them to provide a fix ... very frustrating.

Personally I feel "virtual by default" is "freedom", as in I can do what I want, not like an iPhone :)

Anyway, what I'd like myself is for non-virtual methods to be overridable ... however, I do understand that if I override a method that is not marked virtual, I'm on my own and taking a risk.

I still like the virtual keyword as a marker for APIs the designer intended to be overridden, however.

It would be cool if there could be something like forced_override somemethod (lame keyword, but that's an example), because 1) I'd acknowledge that I'm doing something not planned by the lib designer and taking my chances, and 2) I could easily find where I did so later if I have issues.


helium Fri 13 Aug 2010

As somebody who thinks inheritance from concrete classes is a design smell I love Fantom's decision of non-virtual as default as it often leads to better design.

Oh and I don't like the idea of "fixing" a badly designed library by a quick hack only to see it break with the next release of the library.

Anyway what I'd like myself, is that non-virtual methods can be overridden

A non-virtual method is called directly, without a vtable lookup, so there is no runtime polymorphism; forced_override would not do what you think it does.
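A hedged sketch of why, using hypothetical names: in a language that compiles non-virtual calls to direct invocations, existing call sites are bound at compile time:

class Base
{
  Str who() { return "Base" }               // non-virtual: callers bind directly
  Str describe() { return "I am " + who }   // this call is fixed at compile time
}

// Even if a hypothetical forced_override let a subclass redefine who,
// describe would keep invoking Base.who: there is no vtable slot
// through which the replacement could be reached.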

ivan Fri 13 Aug 2010

If the Java way is so good, why was the @Override annotation added in 1.5? ;)

casperbang Fri 13 Aug 2010

I'm pretty sure I've heard/read of cases (in Practical API Design and the NetBeans code base) where Java's virtual-by-default gets in the way, much as Anders described back in 2003.

Anyway, I like that you have to intentionally mark methods with virtual. The GoF taught us to be cautious of implementation inheritance unless you specifically design for it (i.e. a framework).

rfeldman Fri 13 Aug 2010

As somebody who thinks inheritance from concrete classes is a design smell I love Fantom's decision of non-virtual as default as it often leads to better design. Oh and I don't like the idea of "fixing" a badly designed library by a quick hack only to see it break with the next release of the library.

Agreed that it is a code smell, and agreed that I don't like fixing a badly designed library this way.

However, what I love more than that is that Fantom is all about practicality.

Consider:

Case 1: Spend 2 hours fixing a broken corner case in an otherwise functional library, and another 2 hours modifying the fix after the next release comes out.

Case 2: Spend 40 hours rewriting the library from scratch.

To my mind, the reason a code smell is bad is that it's a red flag that you may end up spending a lot of time maintaining or expanding that piece of code in the future.

Spending a huge amount of time up front to avoid a code smell is like replacing your car's engine when your oil is running out - driving without oil is scary because it means you might have to replace your engine.

I'm pretty sure I've heard/read (Practical API Design and the NetBeans code base) of cases where Java's virtual-by-default gets in the way much as Anders describes back in 2003.

Obviously some number of cases exist where either way proves flawed. The question is which is better on average.

I actually was not even aware anyone had a problem with the way Java does it. All the Java shops where I've ever worked have taken the attitude of "don't override unless you need to," but when you do need to, you sure are glad you have the option.

rfeldman Fri 13 Aug 2010

This kind of reminds me of the static vs dynamic typing debate, actually.

Done wrong, dynamic typing can get you in trouble, but done right, it can save you time.

Done wrong, final can lead to trouble, but done right, it can save you time.

qualidafial Fri 13 Aug 2010

I agree with tcolar's analysis, and lean toward the non-virtual-by-default way as it is now. It is a nightmare trying to evolve an API in Java: you have to mark every new API final if it's not designed for extension, and if you ever forget to do that before you release, you're trapped.

I completely agree about it being inconvenient on the client side, but I think the long-term health of APIs is benefitted more by making virtual methods explicit.

Just an aside: at Eclipse we've sidestepped this whole debate by introducing a @noextend javadoc tag for APIs. This way you leave the door open for clients to extend/override methods but expressly state that such usage is unsupported.

brian Fri 13 Aug 2010

I don't want to shut down the discussion, because I think this has been a great one.

But I do want to set expectations straight - at this point, the bar for making a breaking change in the language is fixing a serious language flaw. This issue is more a matter of opinion about which set of trade-offs you prefer. My main goal now is to keep the language and APIs stable for a fairly long beta period.

jodastephen Fri 13 Aug 2010

Clearly opinions are divided. It's basically a question of who has the power: the library designer or the user. Fantom tends to choose the user's side in most language choices, but not this one.

A relatively simple change could satisfy both sides I believe.

Allow methods to be marked in one of three ways:

  • virtual - can be overridden and is part of the public API intended to be overridden
  • nothing - can be overridden, but is "use at own risk"
  • final - cannot be overridden

Callers would still have to use override. Tools could choose to add a warning or color scheme when overriding a non-'virtual' method.

This scheme, which is backwards compatible, has the best of both worlds. It still allows library developers to seal their APIs if they need to, and to advertise supported extension points. But it doesn't lock the library down and make it unusable when bugs occur (which is a really vital point, and one the current approach has no answer to).
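A sketch of how the three tiers might read (hypothetical syntax; final is not a Fantom method modifier today, and plain methods are currently non-overridable):

// NOT valid Fantom today -- illustrating the proposed three-tier scheme.
class Doc
{
  virtual Void onSave() {}   // designed, supported extension point
  Void save() { onSave }     // no keyword: overridable, but "use at own risk"
  final Void flush() {}      // sealed: can never be overridden
}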

The approach is also less scary to arriving Java developers (who might be more common after today's news!)

Brian/Andy, I know you want to resist breaking changes, but I have a strong gut feeling that you've made this choice just too tight for the language.

rfeldman Fri 13 Aug 2010

+2

Love it!

That would satisfy Brian's point about avoiding breaking language changes (it would only be breaking if virtual were being removed) and would satisfy Andy's point that "When things are marked virtual...I don't have to read docs to understand the intent of APIs - the code speaks for itself."

And of course it would satisfy those of us who wish overriding were permitted by default. :)

tcolar Fri 13 Aug 2010

Also agree with JodaStephen.

@jodastephen: What Java news today did I miss?

cbeust Fri 13 Aug 2010

+1 to JodaStephen's proposal; this is a very Fantomy approach as well (a pragmatic compromise that satisfies both library and application writers).

brian Fri 13 Aug 2010

Callers would still have to use override. Tools could choose to add a warning or color scheme when overriding a non-'virtual' method.

The problem is that we then introduce warnings versus errors. And I hate warnings (I thought you did too). Today the only warnings the compiler emits are when using deprecated APIs. I don't want to have any warning based around language features.

And if you did it for non-virtual, then I think you would also have to do it for private, internal, and protected too.

tactics Fri 13 Aug 2010

It's no good to have the default expose risk.

Personally, I think subclassing is over-valued by OOP advocates. It should be kept to what it's really good at (widget libraries, plug-in architectures, etc) and be left alone otherwise.

However, what makes it excel in the domains it's useful for is the ability to spell out protocols. The virtual keyword is putting these protocols down in pen rather than in pencil.

Only a very small subset of a class's methods will ever be overridden. Looking at fwt::Widget (just for example), I can't think of any sane argument for allowing pack, add, children, each, parent, visible, etc to be overridden.

Perhaps bounds could be overridden, but doing so could easily break other methods (like pack) which rely on invariants that break down when you redefine bounds. And this is the original point I tried to make above: overriding a method that isn't intended to be overridden requires that your code understand the API's implementation details. In other words, misuse breaks abstraction.

So I'm in the "keep things as they are" pile.

cbeust Fri 13 Aug 2010

I can't think of any sane argument for allowing pack, add, children, each, parent, visible, etc to be overridden.

It's perfectly fine not to be able to think of such arguments, but deciding that nobody else ever will is a bit arrogant. And by the way, I can think of several scenarios where I ended up overriding similar methods in other windowing toolkits.

Putting this in light of Jodastephen's suggestion, the author of this class would simply have to ask themselves: "Would overriding this method completely break my class?".

If the answer is yes, make it final. If the answer is no, don't put any restriction on the method.

This is what I like about this suggestion, by the way: it destroys the false dichotomy about methods exposed in an API. It is incorrect that a method must either be overridable or not; there are actually at least three choices:

  • This method should never be overridden or things will break
  • Overriding this method might break things but it should be okay most of the time
  • This method is intended to be overridden

I like how Jodastephen's proposal neatly addresses these three scenarios.

tcolar Fri 13 Aug 2010

@tactics, it's kinda funny that you picked that example, because the one time I ran into this issue with Fantom was actually with a widget.

IIRC, I was trying to make a custom text area (RichText with extra features) ... and I ran into a problem where the area borders were hard-coded in a method that was not override-able (create(), I think it was) ... this caused issues because I wanted to make "Excel"-type text fields and did not want that border.

It would have been easy to override the method to just call the original (super) and add a one-liner to adjust my border ... but it was not override-able, so I couldn't do it. By the way, that kind of super call with one extra thing before or after isn't even dangerous to begin with, and there are scenarios where it's the only option.
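Something like this hedged sketch, with entirely hypothetical stand-ins for the real fwt types, is all it would have taken had the method been virtual:

// Hypothetical stand-in for the library class.
class TextArea
{
  Str? border := "thinBorder"
  virtual Void create() { border = "thinBorder" }  // imagine this hard-codes the border
}

// The one-liner call-super fix:
class ExcelStyleText : TextArea
{
  override Void create()
  {
    super.create   // keep all the original setup...
    border = null  // ...then drop just the unwanted border
  }
}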

Anyway, in my experience there are times when you want to override something that wasn't planned to be overridden ... oftentimes you can do it in ways other than overriding ... but not always, in particular if the method in question is using other protected methods, and scenarios like that.

I'm not saying it's something you should do a lot or that it happens every day, but I don't believe the "designer" can plan for every use of their library ahead of time, and to me it's often better to override ONE method than to rewrite the whole thing from scratch or, worse, copy the whole code and change one line (not that I would do that).

tactics Fri 13 Aug 2010

It's perfectly fine not to be able to think of such arguments but deciding that nobody else ever will is a bit arrogant.

I'm not trying to be arrogant. But if I can't think of any obvious uses, then if I do come across some use, it must be non-obvious. In other words, a clever programmer might find interesting uses, but "clever" is not a good thing in programming :)

The problem I have with this proposal is that the question "can I override this method" is a yes or no question. It is a true dichotomy, because it can only be allowed or disallowed.

The distinction between #2 and #3 is a social distinction, not a programmatic one. What would be the difference to the compiler? If the two are not sufficiently distinct, the proposal is simply to change the default to virtual indirectly.

rfeldman Fri 13 Aug 2010

But if I can't think of any obvious uses, then if I do come across some use, it must be non-obvious.

The chance that a sensible, non-obvious override will arise for any given piece of code approaches 100% the more people make use of that code. No developer is omniscient, and people use APIs in a breathtakingly large variety of different ways.

But whereas with final the only way an API consumer can end up with a bad outcome is if he causes it himself (by overriding something and breaking it), with virtual (as currently implemented) an API consumer can very easily encounter a bad outcome over which he never had any control - because the API developer messed something up, or couldn't think of a reason to let something be overridden, and lo and behold, that situation came up anyway.

Brian and Andy currently do a fantastic job writing Fantom APIs, but sooner or later someone will write a very useful API with holes in it, and on that day I really hope Fantom has changed the virtual policy, because otherwise I'm going to have to throw that 99%-quality API in the trash can - all because I didn't have the ability to patch up one small deal-breaker deficiency.

The bottom line is that this policy punishes competent API consumers in an attempt to save incompetent ones from themselves.

helium Sat 14 Aug 2010

+1 for keeping the language stable and not making this change.

jodastephen Sat 14 Aug 2010

> Callers would still have to use override. Tools could choose to add a warning or color scheme when overriding a non-'virtual' method.

The problem is that we then introduce warnings versus errors. And I hate warnings (I thought you did too). Today the only warnings the compiler emits are when using deprecated APIs. I don't want to have any warning based around language features.

I do dislike warnings. Warnings produced by the compiler. This would be a warning produced by a third party tool - developers would have to choose to use it (although such a tool might usefully be implemented by adding an extension point to the Fantom compiler for warning plugins).

So, yes, I agree. No warnings for core language features.

And if you did it for non-virtual, then I think you would also have to do it private, internal, and protected too.

Not sure I follow. My proposal applies wherever virtual/'override' is used today.

As this discussion has gone on, I personally have moved from "the current design might work" to "the current design is wrong". It's just too prescriptive towards users of libraries to prevent them from overriding methods like this. We don't have monkey patching, which is the other alternative solution.

brian Sat 14 Aug 2010

I do dislike warnings. Warnings produced by the compiler. This would be a warning produced by a third party tool -

That doesn't sit well with me. The whole point of explicit virtual/override is to cleanly encapsulate what should and shouldn't be used polymorphically. If the user overrides something that wasn't designed to be overridden, then to me that is an error. Developers rarely explicitly design and document for overrides. That is why I don't really see some compromised half-way point.

To be frank, I find the arguments against non-virtual-by-default unconvincing, and I usually try to see both sides fairly. The only real argument is to "fix a bug in a class library" or "use a library in a way it wasn't designed". Is that really a solid argument? And in order for this "fix" to even work, the library must be a) closed source, and the method must be b) on a public class, c) one you have access to instantiate yourself (rather than only the library instantiating it), and d) not explicitly marked final.

I definitely take a middle-of-the-road approach to type systems, but I do not carry that over to encapsulation. To me, encapsulation is the absolute most fundamental principle in software engineering. Designing clean interfaces between modules and classes, and clearly dictating what should and should not be used/overridden, is just basic good software engineering. It is how you build large software systems with big teams, even when those team members aren't the best programmers.

So I'm not seeing this. For a blue collar language like Fantom, strict encapsulation rules trump flexibility.

But as I've said before, we are debating a language decision made years ago; that ship has sailed. At some point we have to stop breaking stuff and just ship it. That is why I don't want to make any breaking changes unless they fix a serious language flaw. This is a matter of opinion, not something that is fundamentally broken. And even though many people share the opposing opinion, it is by no means unanimous. I won't consider a breaking change in the language at this point unless the community is truly 100% behind it.

cbeust Sat 14 Aug 2010

Brian,

I'm aware that it's very unlikely that this will change at this point, but I'm still curious to get an answer to the question I asked you:

While writing Fan code over these past years, either libraries or tools, can you estimate how many times you have wanted to override a method, realized that it was not declared virtual, gone to the base class, marked it virtual, and resumed your work?

brian Sat 14 Aug 2010

While writing Fan code over these past years, either libraries or tools, can you estimate how many times you have wanted to override a method, realized that it was not declared virtual, gone to the base class, marked it virtual, and resumed your work?

Actually, that is a great question. I am sure I have done that at some point, but it is extremely rare. Usually I design a given method as a virtual override point (or not) to begin with, and when I do decide to make something virtual, it usually involves some refactoring. But I am probably an odd case, because I am fairly obsessive about keeping my public interfaces as small and strict as possible.

In Fantom, I actually use virtual methods very sparingly. I tend to use functions, composition, reflection, and duck typing much more heavily than polymorphism.
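For example, a plain function field can stand in for a virtual hook (a hedged sketch with hypothetical names):

// Hypothetical class: the extension point is a func field, not a virtual method.
class Button
{
  |Button|? onAction             // nullable function field

  Void fire() { onAction?.call(this) }
}

class Demo
{
  static Void main()
  {
    b := Button.make
    b.onAction = |Button btn| { echo("clicked!") }
    b.fire  // prints "clicked!"
  }
}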

That is just a gut feel, so I decided to run some statistics across both the Fantom and SkySpark (our commercial product) codebases:

pods:       55
types:    1012
methods:  4372
abstracts: 253  5%
virtuals:  315  7%
overrides: 1020 23%

These stats exclude non-public stuff and the Obj methods like equals, hash, etc. So only 7% of all my methods are declared virtual, although 23% are overriding something (I am not counting overrides as virtuals here, even though most overrides are both).

Of that 7% there are a couple of big anomalies where we use polymorphism extensively. Here are the pods that declare the most virtual methods:

compiler 66  (Visitor, AST classes)
sys      52  (Env, InStream, OutStream)
fwt      45  (UI stuff)
flux     24  (UI stuff)
build    15
web      16
util     12  

What is interesting (in my code at least) is that polymorphism is heavily used in only a few pods. Out of 55 total pods, only 7 use more than a dozen virtual methods, and 32 use no virtuals at all.

If I remove the sys, compiler, and UI code from the picture then virtuals drop to 4% of all methods used:

pods:       55
types:     617
methods:  2342
abstracts: 162  6%
virtuals:  116  4%
overrides: 523 22%

So I definitely think polymorphism suits some designs well - most notably UI code. But I think over the years, I've tended towards less and less polymorphism and very shallow class hierarchies.

Edit: the original numbers included statics/constructors, which skewed results by a couple of percentage points.

cbeust Sat 14 Aug 2010

Thanks for the detailed answer, Brian.

I still think that in this particular case, Fantom is giving preference to purity over practicality, which is very unusual, but I respect your choice.

qualidafial Mon 16 Aug 2010

I vote to leave it how it is.

I'm fully aware of the constraints this puts on API users. However the alternative tends to paint API designers into a corner, which in the long term holds back API evolution (assuming you take backward compatibility seriously).

dgt79 Mon 20 Feb 2012

I've just started learning Fantom and everything is pretty great so far, except the virtual keyword. My opinion is that it introduces unneeded friction; in my experience, there have been many times when being able to override methods defined in other libraries proved to be the only viable solution. For a language that offers both static and dynamic capabilities, the restriction implied by the "virtual" keyword is a bit odd. Instead of giving developers the freedom to extend current libraries, the virtual keyword kind of stops the growth...

I'd really like Fantom to be that silver bullet... less verbose than Java, less complicated than Scala, with great metaprogramming capabilities as in Ruby.

StephenViles Mon 20 Feb 2012

I recommend Joshua Bloch's guidance on avoiding inheritance problems in Effective Java, Second Edition:

  • Item 17 "Design and document for inheritance or else prohibit it" - implemented in Fantom by the virtual keyword
  • Item 16 "Favor composition over inheritance" - how to implement the Decorator pattern using a wrapper class (see the sketch below)
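A minimal sketch of that wrapper-class approach in Fantom, with hypothetical types: composition forwards to the wrapped object, so no virtual methods are required and library upgrades can't silently break the extension:

// Hypothetical types illustrating composition over inheritance.
class Logger
{
  Void log(Str msg) { echo(msg) }
}

// Wraps a Logger rather than subclassing it: we control exactly which
// behavior changes, without depending on the library's internals.
class TimestampLogger
{
  new make(Logger wrapped) { this.wrapped = wrapped }

  Void log(Str msg) { wrapped.log("${DateTime.now} $msg") }

  private Logger wrapped
}

class Demo
{
  static Void main()
  {
    TimestampLogger.make(Logger.make).log("hello")
  }
}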
