brian Thu 4 Feb 2010
As part of the Facet Redesign, I've been rethinking pod level stuff. There are all sorts of pod level meta-data:
build configuration like srcDirs, depends, etc
documentation
meta-data like build time, build host, etc
Last summer I started moving everything into a "pod.fan" file which would use normal Fantom syntax for declaring documentation as a fandoc comment, facets using standard syntax, and symbols. But there are some big problems with that approach: writing documentation as ** comments is annoying for a big pod level doc. And having build meta-data trapped in facet syntax makes it extremely difficult to access without running the full compiler. Not to mention we don't actually have symbols anymore.
So to rethink this issue, the first question is: do pods have facets just like types and slots? It would be nice from a consistency perspective. But from a practical perspective the meta-data associated with pods tends to be more ad hoc, where applying normal facet static typing is painful. It also causes big problems with regard to reflection, because you have to load all those types just to access pod meta-data.
So the following is my proposed design:
Pods don't have facets, just Types and Slots
Pod.meta is used to access arbitrary meta-data as a Str:Str map
"pod.fan" goes away
Build configuration goes back into "build.fan", which for most pods will be the only file you need to compile a pod
"build.fan" can annotate Pod.meta with whatever it wants
Facet indexing will get stuck in Pod.meta for Env delegation
optional "pod.fandoc" is used for pod level documentation
"docLib" goes away and becomes "pod.fandoc" in each respective pod
tactics Thu 4 Feb 2010
I wasn't really a fan of pod.fan. It was awkward having the one-off file sitting next to the build.fan when most of the information inside it was build information.
andy Thu 4 Feb 2010
This seems like a step (back) in the right direction. Have some concern over the pod facets - but not sure when you'd ever use one. And in that rare case, just using the meta data seems sufficient. So it's probably ok.
katox Fri 5 Feb 2010
Just thinking aloud ... if pods get this sort of special treatment, what about moving imports from each file into their respective pods? Pods would be the "assembly" place declaring dependencies, configuration, etc.
brian Fri 5 Feb 2010
Pods would be the "assembly" place declaring dependencies, configuration, etc.
That has always been the case. Your pod depends are what the compiler and runtime use. The using statement is not about dependencies, but rather about controlling the namespace of a compilation unit.
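To illustrate the distinction with a hedged sketch (web::WebClient is just a stand-in example type): the depends list in build.fan is what the compiler and runtime resolve against, while using only decides which names are visible in a given source file.

  // build.fan: the pod's real dependencies
  depends = ["sys 1.0", "web 1.0"]

  // SomeFile.fan: 'using' only shapes this file's namespace;
  // the dependency on web exists either way
  using web
  class Example
  {
    Obj client := WebClient()  // without 'using web' this would be web::WebClient()
  }

Dropping the using statement would not remove the dependency; it would only force fully qualified names in that file.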
katox Fri 5 Feb 2010
Still, replacing one pod implementation with another - even if the pod interfaces were the same - is not a trivial task.
I have to make a third pod delegating to those two (like slf4j) or do a mass replace of using statements. If the namespace weren't global but pod-defined, internal classes wouldn't see anything else, and the swap could be done simply by adjusting pod dependencies and namespace definitions (in one place).
This is very hazy, but I had to resolve a similar problem several times in a large Java project (due to major bugs in lib implementations or memory leaks) and it was a nightmare. Even though Fantom is far better in this regard, I still feel class level imports might complicate this.
andy Fri 5 Feb 2010
I see your angle - but you just set yourself up for the opposite case - add an import and now your classes might have a naming collision and you have to go back and fix. I think the model is correct - the files that use the types control their own namespace - simple and robust - at the cost of the refactoring case.
brian Fri 5 Feb 2010
New Type Database Design
The new design is going to be lower level, but simpler and more flexible. Instead of indexing facets and using the static type system, each pod will declare an "index.props" file. The Env will be responsible for coalescing all the installed pods into a master index file (potentially with a key having multiple values).
For example you will now use a pod index prop to bind a URI scheme to a type:
sys.uriScheme.http=web::HttpScheme
This design makes it easy for Envs to manage their pod index, but still provides a simple, flexible foundation for building higher level plug-in functionality.
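As a hedged sketch of the consuming side, assuming an Env accessor that returns every value bound to an index key (the exact method name is my guess, not part of this proposal):

  // look up every type qname bound to the key across installed pods
  qnames := Env.cur.index("sys.uriScheme.http")  // e.g. ["web::HttpScheme"]
  // reflectively resolve and instantiate the first binding
  scheme := Type.find(qnames.first).make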
Once we have experience under our belt, I think we'll want to build higher level infrastructure on top of this using DSLs or facets to automatically generate appropriate key/value pairs, but for now it will be manual.