#822 FantomIDE 1.4.3 (Formerly Netbeans FanIDE)

tcolar Wed 18 Nov 2009

I released a new version (1.4.3)

http://fantomide.colar.net/home

Changes:

  • Changed name to FantomIDE (of course :) ) - new "artwork" too.
  • Updated for Fantom 1.0.47
  • Multiple bug fixes to the editor and grammar.

I've been using Fan, er... Fantom! for a project of mine, and since I'm using FantomIDE as my editor I was able to find and fix multiple small bugs in the IDE.

Still no completion; I haven't had time to work on it lately, to be honest. I just restarted on it in the last few days and have some of it working, but it's disabled for now until I complete it.

DanielFath Wed 18 Nov 2009

Yeah it is a great little IDE. Even without the auto-completion it is highly usable.

tcolar Fri 11 Dec 2009

I still have some work to do, but here is a "teaser" demo of the current completion support:

http://bit.ly/4EanYs

I should release a version with completion before the end of the year.

KevinKelley Sat 12 Dec 2009

Very nice! I like that it's showing the fandoc as you're scrolling through the choices; seems like exactly the information I'm always (really, literally always) tabbing back and forth to a browser for. This is going to be pretty great.

tcolar Sat 12 Dec 2009

Yeah it is. Pretty useful when you are unfamiliar with the APIs.

brian Sat 12 Dec 2009

This is really awesome tcolar.

Can you talk a little about the design you've used with regard to which parts of Fantom's compiler you are using versus which parts you are implementing from scratch (or reusing in NetBeans)?

How much of the typedb are you using versus your own backend structures?

tcolar Sat 12 Dec 2009

I guess I should make a proper blog entry on my site, but here is the idea:

NetBeans has its own parsing/indexing APIs: http://wiki.netbeans.org/ParsingAPI

Note: Lots of the code is still completely under construction, so don't judge it too much :)

Parsing

But as far as the parsing itself goes, the API is just interfaces for you to implement, so you are basically left to do it yourself, the goal being to produce an AST.

Now most language compilers are very "strict" and will fail, and not build an AST at all, if there are errors in the source. In the case of an IDE, though, you still want an AST even when the source has errors (say, an incomplete expression being typed).

For this reason I use ANTLR, a "compiler generator" that does support recovery from missing items (not all of them; you have to deal with some manually). So anyway, you write the grammar:

http://svn.colar.net/Fan/src/net/colar/netbeans/fan/antlr/Fan.g

And it generates a (huge) Lexer and Parser for you.

Really, writing that grammar is the trickiest part; once you have that working you get a nice AST to work with. I also did some extra work on presenting a "nice" error to the user, because ANTLR errors can be very cryptic.
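
To make that concrete, here is a minimal Java sketch of driving the generated classes to get the tree. FanLexer and FanParser are what ANTLR 3 generates from Fan.g; the entry rule name (compilationUnit) is a placeholder I made up, not necessarily what the real grammar calls its top rule.

import org.antlr.runtime.ANTLRStringStream;
import org.antlr.runtime.CommonTokenStream;
import org.antlr.runtime.RecognitionException;
import org.antlr.runtime.tree.CommonTree;

public class FanParseSketch {
    // Parse a Fantom source string into an ANTLR AST.
    // "compilationUnit" is a placeholder for the grammar's actual top rule.
    public static CommonTree parse(String source) throws RecognitionException {
        ANTLRStringStream input = new ANTLRStringStream(source);
        FanLexer lexer = new FanLexer(input);             // generated from Fan.g
        CommonTokenStream tokens = new CommonTokenStream(lexer);
        FanParser parser = new FanParser(tokens);         // generated from Fan.g
        // Thanks to ANTLR's error recovery, this still returns a (partial)
        // tree even when the source contains syntax errors.
        FanParser.compilationUnit_return result = parser.compilationUnit();
        return (CommonTree) result.getTree();
    }
}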

If I were to do it over, I would probably have used RATS!, another compiler generator that seems much simpler to use and understand; on the other hand, it appears to be a little less adept at error recovery.

Now with that AST I can do things like code folding, the Navigator (list of methods, structures) and highlighting/coloring items. You can also show the user some basic semantic errors.

Here is an example of a piece of the AST (a method):

Str somefunc(Int a, Str s)
{
    Int toto := 45
    Str tata := ""
    tutu := s.capitalize
}

AST tree (flattened):

(AST_METHOD (AST_ID somefunc) (AST_TYPE (AST_ID Str)) (AST_PARAM (AST_TYPE (AST_ID Int)) (AST_ID a)) , 
(AST_PARAM (AST_TYPE (AST_ID Str)) (AST_ID s)) (AST_CODE_BLOCK { (AST_LOCAL_DEF (AST_ID toto) 
(AST_TYPE (AST_ID Int)) := (AST_TERM_EXPR (AST_CHILD 45) AST_CHILD)) (AST_LOCAL_DEF (AST_ID tata)
 (AST_TYPE (AST_ID Str)) := (AST_TERM_EXPR (AST_CHILD (AST_STR "")) AST_CHILD)) 
(AST_LOCAL_DEF (AST_ID tutu) := (AST_TERM_EXPR (AST_CHILD (AST_STATIC_CALL (AST_TYPE (AST_ID s)) 
(AST_CHILD (AST_ID capitalize)))) AST_CHILD)) }))
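
For illustration, here is a small sketch of walking that tree, e.g. to collect method names for the Navigator. It assumes the imaginary token types seen in the dump above (AST_METHOD, AST_ID) are exposed as int constants on the generated FanParser, which is the usual ANTLR 3 convention:

import java.util.ArrayList;
import java.util.List;
import org.antlr.runtime.tree.Tree;

public class NavigatorSketch {
    // Collect the name of every method node in the AST.
    public static List<String> methodNames(Tree root) {
        List<String> names = new ArrayList<String>();
        collect(root, names);
        return names;
    }

    private static void collect(Tree node, List<String> names) {
        if (node.getType() == FanParser.AST_METHOD && node.getChildCount() > 0) {
            Tree id = node.getChild(0);            // (AST_ID somefunc)
            if (id.getChildCount() > 0) {
                names.add(id.getChild(0).getText());
            }
        }
        for (int i = 0; i < node.getChildCount(); i++) {
            collect(node.getChild(i), names);
        }
    }
}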

Completion

For completion, you need two pieces:

  • First you need to resolve the type being completed, whether a simple type, a call, a static call or an expression.

To be able to resolve anything you need to deal with scoping (i.e. what vars, types, etc. are defined and accessible at the given location). So the first thing was to take my ANTLR AST and transform it into a "scope tree".

It starts with the RootScope, which is the document root; we use it to store all the declared imports (using). Its children are all the types, which contain declared fields, inherited items (mixins, superclass) and so on. Those might have method children with parameters. And then after that, all other subscopes are just standard ScopeNodes that might contain local var defs, etc.
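
Roughly, the scope tree can be pictured like this (the class shapes below are illustrative, not the actual ones in the plugin):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative scope tree: each node knows its parent, its children and
// what is declared directly in it.
class ScopeNode {
    final ScopeNode parent;
    final List<ScopeNode> children = new ArrayList<ScopeNode>();
    // declared name -> type signature (e.g. "sys::Str")
    final Map<String, String> declarations = new HashMap<String, String>();

    ScopeNode(ScopeNode parent) {
        this.parent = parent;
        if (parent != null) parent.children.add(this);
    }

    // Walk up the scope chain until the name is found.
    String resolve(String name) {
        for (ScopeNode s = this; s != null; s = s.parent) {
            String type = s.declarations.get(name);
            if (type != null) return type;
        }
        return null; // unresolvable -> candidate for an error marker
    }
}

// The document root also records the declared imports (using).
class RootScope extends ScopeNode {
    final List<String> usings = new ArrayList<String>();
    RootScope() { super(null); }
}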

Note: while building this scope tree, I can detect many errors like unresolvable imports, variables/fields declared twice and many other things (I've only done a bit of this for now, as the current goal is completion; I will add more later).

Now when somebody tries to complete, I use the scope tree to resolve the variable/expression being completed, and if I succeed at resolving it, then I can propose the proper options: http://svn.colar.net/Fan/src/net/colar/netbeans/fan/completion/

Resolving types

The tricky part of completion is resolving the type being completed; the easy part is finding a "standard" type. For example, doing reflection using Fantom's type DB I can easily find my type and propose its slots.

The issue is that you might have multiple open files that are NOT compiled yet (thus not available in the type DB). This is why the IDE usually does not use the type DB directly, but instead creates an index of all the documents and their methods/slots (basically the RootScope data) and uses that (it also guarantees you are always using the "most recent" code, compiled or not).
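
The lookup order then boils down to something like this (SourceIndex and PodReflection are made-up stand-ins for the real lookup code, just to show the fallback logic):

import java.util.Collections;
import java.util.List;

interface SourceIndex {
    // Slots of a type as recorded from parsed (possibly uncompiled) sources, or null.
    List<String> slotsOf(String qualifiedType);
}

interface PodReflection {
    // Slots of a type found by reflecting the compiled pods.
    List<String> slotsOf(String qualifiedType);
}

class CompletionResolver {
    private final SourceIndex index;
    private final PodReflection pods;

    CompletionResolver(SourceIndex index, PodReflection pods) {
        this.index = index;
        this.pods = pods;
    }

    // Prefer the source index (always reflects the latest edits, compiled or not);
    // fall back to reflection against the compiled pods.
    List<String> proposals(String qualifiedType) {
        List<String> slots = index.slotsOf(qualifiedType);
        if (slots != null) return slots;
        slots = pods.slotsOf(qualifiedType);
        return slots != null ? slots : Collections.<String>emptyList();
    }
}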

Indexing

I'm still deciding how to implement this, but the basic idea is:

  • Index all the source data (scope infos), so when completion is requested you search first in that index, and if the item is not found then you do reflection using the compiled pods.

Another use of the index will be to store "where used" info (where a var is used), which can then be used for quick search/replace and eventually refactoring.

Now NetBeans' indexing method is to use a bunch of flat files, one per source, with some signatures in them; the indexer for Java sources seems to store "signatures" generated with javac. Then it uses Lucene to store the where-used stuff.

I'm not sold on this approach; I'm not digging Lucene, and I like the flat signature file approach they are using even less. By the way, NetBeans indexing is notoriously slow and I/O heavy, and now I know why.

Anyway, I'm considering one of the two following options right now:

  • Do it with flat files, but at least use a better format. I could pretty much serialize my root/type scope nodes, and I'm thinking doing that with JSON would be very nice.
  • I can't help but think this would be better served by an actual database; that just makes more sense. I could use either JavaDB (standard with JDK 1.6) or HSQLDB, which I'm more familiar with and like a lot (it can store data in plain text files). And even if the DB got messed up, losing an index is not the end of the world.

I really feel like a DB is the way to go here, since it's already optimized for data storage/retrieval, and a custom-made implementation with flat files most likely won't be nearly as good. Plus the DB is obviously way better for searching.
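
To make the DB option concrete, here is a minimal sketch of what a slot index could look like with HSQLDB in file mode (the schema, table name and sample values are invented for illustration, not taken from the plugin):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class IndexDbSketch {
    public static void main(String[] args) throws Exception {
        Class.forName("org.hsqldb.jdbcDriver");
        Connection con = DriverManager.getConnection(
            "jdbc:hsqldb:file:fanindex", "sa", "");

        // One-time setup: one row per declared slot.
        con.createStatement().execute(
            "CREATE TABLE slots (pod VARCHAR(128), type VARCHAR(128)," +
            " slot VARCHAR(128), sig VARCHAR(512))");

        // Re-indexing a parsed document means replacing its rows with
        // fresh scope data.
        PreparedStatement ins = con.prepareStatement(
            "INSERT INTO slots (pod, type, slot, sig) VALUES (?, ?, ?, ?)");
        ins.setString(1, "myPod");
        ins.setString(2, "MyClass");
        ins.setString(3, "somefunc");
        ins.setString(4, "Str somefunc(Int a, Str s)");
        ins.executeUpdate();

        // Completion query: every slot declared on a given type.
        PreparedStatement q = con.prepareStatement(
            "SELECT slot, sig FROM slots WHERE pod = ? AND type = ?");
        q.setString(1, "myPod");
        q.setString(2, "MyClass");
        ResultSet rs = q.executeQuery();
        while (rs.next()) {
            System.out.println(rs.getString(1) + " : " + rs.getString(2));
        }

        con.createStatement().execute("SHUTDOWN"); // flush the file-mode db
    }
}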

After completion

The next step will be to use the scope data to do some semantic analysis and warn the user about as many mistakes as possible. I have what I need, so it won't be very hard, just time consuming.
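
As a tiny example of the kind of check the scope data enables, here is a sketch of flagging a local variable declared twice in the same scope (Decl and the message format are made up; the real code would work on the scope tree nodes directly):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class DuplicateDeclCheck {
    // Minimal stand-in for a declaration gathered while building a scope.
    static class Decl {
        final String name;
        final int line;
        Decl(String name, int line) { this.name = name; this.line = line; }
    }

    // Return one warning per re-declaration of a name within the same scope.
    static List<String> check(List<Decl> scopeDecls) {
        Map<String, Integer> firstSeen = new HashMap<String, Integer>();
        List<String> warnings = new ArrayList<String>();
        for (Decl d : scopeDecls) {
            Integer line = firstSeen.get(d.name);
            if (line != null) {
                warnings.add("'" + d.name + "' at line " + d.line
                        + " is already declared at line " + line);
            } else {
                firstSeen.put(d.name, d.line);
            }
        }
        return warnings;
    }
}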

Do I win anything for longest post ever :)

DanielFath Sun 13 Dec 2009

Sir, have my babies :) Also you can have my cookies.

tcolar Sun 13 Dec 2009

Huh ... I have enough babies ... one of them is being sicky at the moment ...

Please feel free to ship the cookies however :)

brian Sun 13 Dec 2009

@tcolar - great write-up, thanks for taking the time to share your design

Having real auto-complete support for Fantom in an IDE illustrates that Fantom is very "toolable" (like Java or C#, but unlike other more truly dynamic languages).
