Scala uses attributes (at the moment) to help out the code generator;
things like [serializable] and [remote] are translated into additional bytecode. But what does it mean to generalize this notion? Writing a phase of the Scala compiler is fairly involved, but certain common code transformations might be expressed more easily. Scala's reflect package contains case classes that constitute a mapping to the AST; nsc has its own mapping as well. Rather than make progressively more opaque statements on the matter, I'll give a short example to convey the idea.

First we define a "meta" attribute, to give us a base class for attributes that transform the AST (we'll give two "tagging" classes as well, for use later):

    abstract class meta extends Attribute;
    abstract class metatag extends meta;
    class generated extends metatag;

Next we provide a base class for those that generate new tree information, based on a context:

    abstract class generate extends meta {
      def transform(genType: Tree, subject: Tree): Tree;
    }

Now we have what we need for an example. Delegation is very common; we often have an outer object that implements an interface but delegates all activity on that interface to an inner object or a member. Let's create a pseudo-code version of this kind of tree transformation; I am allowing type parameters to be specified here. Keep in mind that the transform method is executed inside nsc, and as such has a great deal of context available to it, including the type parameters.

    class delegate[FROM, TRAIT] extends generate {
      def transform(genType: Tree, subject: Tree) = {
        ...for (val method <- genType.typeParam(TRAIT))...
        ...findType(FROM).insert(mapping(method))...
      }
    }

    trait Helper {
      def op1(x: int): int;
      def op2(y: String): String;
    }

    class Outer extends Helper {
      [delegate[Outer,Helper]]
      val inner = new Inner

      class Inner extends Helper {
        def op1(x: int) = 0
        def op2(y: String) = "Hello"
      }
    }

If we compile Outer, it is transformed to this:

    class Outer extends Helper {
      val inner = new Inner

      [generated] def op1(x: int) = inner.op1(x)
      [generated] def op2(y: String) = inner.op2(y)

      class Inner extends Helper {
        def op1(x: int) = 0
        def op2(y: String) = "Hello"
      }
    }

It's obvious that many kinds of transformations like this are possible. The [beanProperty] attribute that exists now could easily be generalized into a metaprogram attribute. Clearly the operations that are possible in the meta attribute classes should be quite limited, and overall these annotations should lean heavily toward being declarative rather than procedural. Metaprogram attributes can be used for inlining, code generation, and much of what we think of as "aspect-oriented programming". It does require engaging with Scala's type system, but that consists of a small number of concepts that are highly orthogonal -- an "as simple as possible but no simpler" situation.

Scala's class mixins work nicely to combine functionality:

    class log extends meta { ... }
    class checkSecurity extends meta { ... }
    abstract class pointcut extends meta;
    abstract class methodEntry extends pointcut {
      ... for each method in subject type ...
    }
    class logSecureMethodEntry extends methodEntry with checkSecurity with log;

    [logSecureMethodEntry]
    class Complicated { ... }

Metaprogram attributes can also be applied to other metaprogram attributes, providing an open-ended recursive programming environment at compile time, specified in Scala itself and supported by normal nsc Tree functionality. The beauty of all of this (in my opinion) is that it requires no changes whatsoever to the Scala language or syntax.
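To see the forwarder-generation idea outside of any compiler context, here is a tiny runnable sketch. The Method case class and forwardersFor helper are invented stand-ins for the real tree types and transform machinery:

    // Toy stand-in for a method node -- hypothetical, not an nsc or
    // scala.reflect type.
    case class Method(name: String, params: List[(String, String)], result: String)

    // Produce source text for the forwarders a delegate transform would
    // insert: one per required method not already implemented.
    def forwardersFor(required: List[Method], implemented: Set[String],
                      field: String): List[String] =
      for (m <- required if !implemented(m.name)) yield {
        val sig  = m.params.map(p => p._1 + ": " + p._2).mkString(", ")
        val args = m.params.map(_._1).mkString(", ")
        "[generated] def " + m.name + "(" + sig + "): " + m.result +
          " = " + field + "." + m.name + "(" + args + ")"
      }

    val helper = List(
      Method("op1", List(("x", "int")), "int"),
      Method("op2", List(("y", "String")), "String"))

    // Outer implements nothing itself, so both methods are forwarded.
    forwardersFor(helper, Set(), "inner") foreach println
    // [generated] def op1(x: int): int = inner.op1(x)
    // [generated] def op2(y: String): String = inner.op2(y)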
What this scheme does is provide a means of extending the Scala compiler dynamically, by providing the library code the compiler needs to perform the transformations. I haven't researched this area, but I think this moves strongly in the direction of a typesafe macro system (type parameters to metaprogram attributes can have type bounds), with the added benefit of being highly readable compared to most macro systems. Metaprogram attributes get to use the same kind of pattern matching that the compiler does:

    abstract class treeFilter extends meta {
      def filter(t: Tree): boolean;
    }

    class getMethodFilter extends treeFilter {
      def filter(t: Tree) = t match {
        case Method(name, _, ...)
          if (name.startsWith("get") && name.length() > 3 &&
              name.charAt(3).isUpperCase()) => true
        case _ => false
      }
    }

Steps to make this real would be:

1. Choose tree representation (nsc internal or scala.reflect).

2. Provide a small hierarchy of "base" metaprogram attributes for delegation, generation, pointcuts, etc., to prove the concept overall. These would hopefully do the dirty work of interacting with the nsc Tree.

3. Pick the right phase within nsc to do the work -- the metaprogram attribute base classes might carry the phase with them as a characteristic. We could actually define meta like this:

    abstract class meta extends Attribute {
      val phase: String
    }

Execution of a metaprogram attribute is somewhat like a "micro-phase" within nsc -- a phase within a phase, where the micro-phase is further limited in scope to an area of the tree.

One last "out there" example:

    class generateJDBCVars(jdbcURL: String, query: String) extends generate {
      var meta: ResultSetMetaData = _;
      inspect;

      def inspect = {
        var connect: Connection = DriverManager.getConnection(jdbcURL)
        var q = connect.createStatement()
        var rs = q.executeQuery(query)
        meta = rs.getMetaData()
        rs.close()
        connect.close()
      }

      def transform(genType: Tree, subject: Tree) = {
        var x = 0
        while (x < meta.getColumnCount()) {
          generateVar(subject, meta.getColumnName(x),
                      mapSQLType(meta.getColumnType(x), ...))
          generateMethod(subject, "describe_" + meta.getColumnName(x),
                         template("describe").set("description", meta.getColumnLabel(x)))
          x = x + 1
        }
      }

      [template]
      def describe: String = {
        [valueParameter("description")]
        val txt = ""
        txt
      }
    }

    [generateJDBCVars("jdbc:...", "select * from invoice")]
    class InvoiceBean;

Being able to "sculpt" code at compile time has a long and useful history... I think Scala's extraordinary flexibility may be the key from a user's point of view, and the modularization of its compiler the key from an implementation point of view. It seems to me that metaprogram attributes are highly complementary and orthogonal to Scala's existing facilities.

I guess this all sounds like something I should just get right in there and build a prototype for, right? ;) I hope this is of interest to the Scala team. One of the things that irritates me the most is writing repetitive, boilerplate code. A compile-time transformation facility bridges the gap between static compilation and runtime reflection, and is more than effective for most purposes.

Ross Judson
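P.S. The inspection step of the JDBC example can be sketched against the real java.sql API (the URL and query are placeholders). Note that ResultSetMetaData columns are 1-indexed, unlike the 0-based loop in the pseudo-code above:

    import java.sql.DriverManager

    // Collect a (name, SQL type) pair for each column the query returns;
    // works with any JDBC driver on the classpath.
    def columnInfo(jdbcURL: String, query: String): List[(String, Int)] = {
      val connect = DriverManager.getConnection(jdbcURL)
      try {
        val rs = connect.createStatement().executeQuery(query)
        val meta = rs.getMetaData()
        val cols = for (i <- (1 to meta.getColumnCount()).toList)
                     yield (meta.getColumnName(i), meta.getColumnType(i))
        rs.close()
        cols
      } finally connect.close()
    }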
Actually, I am working right now on a meta-programming (or staging)
system very similar to what Ross describes.

> Steps to make this real would be:
>
> 1. Choose tree representation (nsc internal or scala.reflect).

It uses a souped-up version of scala.reflect.

> 2. Provide small hierarchy of "base" metaprogram attributes for
> delegation, generation, pointcuts, etc, to prove the concept overall.
> These would hopefully do the dirty work of interacting with the nsc
> Tree.

That is one of the ideas. I am also working on a system that will apply transformations to expressions based on their type. In the end, both should be available to the meta-programmer.

> 3. Pick the right phase within nsc to do the work -- the metaprogram
> attribute base classes might carry the phase with them as a
> characteristic.

The first version of the framework will apply transformations just after type-checking (that is, after the refCheck phase), where the tree has been proved to be type-safe but still looks like a Scala program (afterwards the tree "degenerates", in the sense that all of Scala's high-level features -- pattern matching, objects, mixins, etc. -- are transformed to their Java implementations). But being able to pick the phase where a transformation should be applied is clearly on my radar.

> Execution of a metaprogram attribute is somewhat like a "micro-phase"
> within nsc -- a phase within a phase, where the micro-phase is further
> limited in scope to an area of the tree.

That is very much the idea. But I also plan to allow meta-programming transformations to be applied to live programs (i.e. at run-time as opposed to compile-time). This permits transformations relying on dynamic properties to be defined. Compile-time application then becomes (conceptually at least, because in practice it is a lot trickier) an optimisation of the general dynamic case.

> Being able to "sculpt" code at compile time has a long and useful
> history... I think Scala's extraordinary flexibility may be the key from
> a user's point of view, and the modularization of its compiler the key
> from an implementation point of view. It seems to me that metaprogram
> attributes are highly complementary and orthogonal to Scala's existing
> facilities.

Yes, that is also the way I see it.

> I guess this all sounds like something I should just get right in there
> and build a prototype for, right? ;) I hope this is of interest to the
> Scala team.

So much so that it happens to be my PhD topic :-) I don't really know when you can expect a prototype, but I hope something simple will be available before the summer.

Cheers,
Gilles.
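P.S. To give a flavour of what "transformations applied to expressions based on their type" might look like, here is a toy sketch; the Expr classes and rewrite function are invented for illustration and are not the framework's actual representation:

    // Toy expression trees carrying a type tag.
    abstract class Expr { def tpe: String }
    case class Lit(value: Int) extends Expr { def tpe = "Int" }
    case class Plus(l: Expr, r: Expr) extends Expr { def tpe = "Int" }

    // Apply `rule` bottom-up to every subtree whose type matches `tpe`.
    def rewrite(e: Expr, tpe: String, rule: Expr => Expr): Expr = {
      val e1 = e match {
        case Plus(l, r) => Plus(rewrite(l, tpe, rule), rewrite(r, tpe, rule))
        case other      => other
      }
      if (e1.tpe == tpe) rule(e1) else e1
    }

    // Example rule: constant-fold Int-typed additions of two literals.
    val fold = (e: Expr) => e match {
      case Plus(Lit(a), Lit(b)) => Lit(a + b)
      case other                => other
    }

    rewrite(Plus(Lit(1), Plus(Lit(2), Lit(3))), "Int", fold)
    // => Lit(6)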
I am glad to hear that this is of interest, and that a serious effort is
being directed at it. In my opinion, getting this "right" is going to be one of the keys to Scala's overall acceptance over the long term.

The existing scala.reflect is a bit difficult to understand ;) What's needed are a few examples of how it can work. If you have a modified version of scala.reflect I'd like to have a look, if possible.

I've spent a bit of time trying to implement the "delegate" metaprogram attribute -- it needs to be activated before refchecks, because refchecks will reject the definitions as incomplete. Completing them is the job of the delegator, which needs to insert methods. The delegation example isn't real metaprogramming, though -- it's just special-case code inside the compiler, and that's not really the point of the overall exercise.

One of the significant questions is whether generating the scala.reflect objects imposes a high cost on the compiler. This can be avoided by using lazy instantiation (only instantiating if a particular path is followed), but lazy instantiation doesn't work well with pattern matching. Perhaps metaprogram attributes can declare a "scope" within which they operate; the compiler need only provide partial information in that case.

I've pasted below my current work -- it's an nsc phase that implements delegation as a metaprogram attribute. Usage is like this:

    trait Help {
      def add(x: int, y: int): int;
      def subtract(x: int, y: int): int;
    }

    class Example extends Help {
      [delegate[Help]]
      val helper = new Help() {
        def add(x: int, y: int) = x + y;
        def subtract(x: int, y: int) = x - y;
      }

      def add(x: int, y: int) = helper.add(x, y);
    }

The delegation transformer ensures that the necessary subtract method is inserted and redirected to helper, which completes the class and allows it to compile without an abstract-class error. Note the hardcoding of the attribute name in the delegation transformer; I've been trying to figure out how to generalize this in a manner that allows me to create attributes that are aware of nsc trees and generally able to manipulate them, while at the same time being fully separate from the compiler. It's a bit of a tricky situation, as there's quite a bit of nsc machinery that needs to be available to metaprogram implementations. Providing a simplified "plugin" point deep into nsc's heart is a challenge (at least for the uninitiated).

I make no claims about the correctness of the delegate transform, but the technique of using attribute type parameters to guide metaprogramming seems sound. It certainly makes for clean notation, as indicated by the usage example.
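As an aside on the lazy-instantiation point above, here is a toy illustration of why extractor patterns defeat laziness (the LazyNode type is invented, not nsc's):

    // A node that builds its children only on demand.
    class LazyNode(val name: String, kids: => List[LazyNode]) {
      lazy val children = kids
    }

    object LazyNode {
      // Any match of the form LazyNode(n, cs) calls this, evaluating
      // `children` -- so pattern matching forces the lazy structure.
      def unapply(node: LazyNode): Option[(String, List[LazyNode])] =
        Some((node.name, node.children))
    }

    def describe(t: LazyNode): String = t match {
      case LazyNode(n, Nil) => n
      case LazyNode(n, cs)  => n + "(" + (cs map describe).mkString(", ") + ")"
    }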
RJ

    package scala.tools.nsc.metaform;

    import nsc.transform._

    abstract class MetaTransform extends Transform {
      // inherits abstract value `global' and class `Phase' from Transform

      import global._;                        // the global environment
      import definitions._;                   // standard classes and methods
      import typer.{typed, atOwner};          // methods to type trees
      import posAssigner.atPos;               // for filling in tree positions
      import scala.tools.nsc.symtab.Flags._;

      /** the following two members override abstract members in Transform */
      val phaseName: String = "metatransform";

      protected def newTransformer(unit: CompilationUnit): Transformer =
        new DelegationTransformer(unit);

      class DelegationTransformer(unit: CompilationUnit) extends Transformer {

        override def transform(tree: Tree): Tree = {
          val t = super.transform(tree)
          t match {
            case templ @ Template(parents, body) => {
              // collect each val carrying a [delegate[T]] attribute,
              // paired with the trait type T it delegates
              val delegations =
                for (val vd @ ValDef(_, _, _, _) <- body;
                     val Pair(attrType @ TypeRef(_, sym, trt :: Nil), attrConst)
                       <- vd.symbol.attributes;
                     sym.fullNameString == "com.soletta.sl.delegate")
                yield Pair(vd, trt);
              delegations match {
                case Nil => templ
                case _ =>
                  copy.Template(templ, parents,
                    body ::: redirect(delegations, templ.tpe.members))
              }
            }
            case _ => t
          }
        }

        // for each deferred member, generate a forwarder to the first
        // delegate whose trait supplies a matching method
        private def redirect(delegations: List[Pair[ValDef, Type]],
                             mem: List[Symbol]): List[Tree] = mem match {
          case needed :: tail if (needed hasFlag DEFERRED) => {
            delegations.find(del => del._2.nonPrivateMember(needed.name) match {
              case NoSymbol => false
              case sym: Symbol => sym.tpe.matches(needed.tpe)  // false on mismatch,
                                                               // avoiding a MatchError
            }) match {
              case Some(Pair(vd, trt)) =>
                val meth = newSyntheticMethod(currentOwner, needed.name, 0, needed.tpe)
                val sel = Select(gen.mkAttributedRef(vd.symbol), needed)
                val methDef = DefDef(meth, vparamss => {
                  val appl = Apply(sel, vparamss.head map Ident)
                  appl
                })
                typed(methDef) :: redirect(delegations, tail)
              case _ => redirect(delegations, tail)
            }
          }
          case head :: tail => redirect(delegations, tail)
          case _ => Nil
        }

        def newSyntheticMethod(clazz: Symbol, name: Name, flags: int, tpe: Type) = {
          val method = clazz.newMethod(clazz.pos, name) setFlag flags setInfo tpe;
          clazz.info.decls.enter(method);
          method
        }
      }
    }