gnupg very basically

I’m trying to get jslint4java into central, via oss.sonatype.org. Part of this requires that you use the maven-gpg-plugin to sign your artifacts. All well & good, but I’ve never used GPG before (though I’ve been playing with SSL certificates for years).

So, following along the howto, I did:

$ gpg --gen-key
gpg (GnuPG) 1.4.9; Copyright (C) 2008 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Please select what kind of key you want:
   (1) DSA and Elgamal (default)
   (2) DSA (sign only)
   (5) RSA (sign only)
Your selection? 1
DSA keypair will have 1024 bits.
ELG-E keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048)
Requested keysize is 2048 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 0
Key does not expire at all
Is this correct? (y/N) y

You need a user ID to identify your key; the software constructs the user ID
from the Real Name, Comment and Email Address in this form:
    "Heinrich Heine (Der Dichter) "

Real name: Dominic Mitchell
Email address: dom@happygiraffe.net
Comment:
You selected this USER-ID:
    "Dominic Mitchell "

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.

We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
+++++.++++++++++.++++++++++.+++++++++++++++.+++++++++++++++.+++++...++++++++++.+++++.+++++++++++++++++++++++++++++++++++++++++++++++++++++++>++++++++++>+++++......+++++
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
..++++++++++++++++++++.++++++++++++++++++++++++++++++...++++++++++++++++++++++++++++++.++++++++++++++++++++++++++++++.+++++++++++++++.+++++...++++++++++.+++++>.++++++++++>..+++++>+++++.......+++++^^^
gpg: /Users/dom/.gnupg/trustdb.gpg: trustdb created
gpg: key A24D5076 marked as ultimately trusted
public and secret key created and signed.

gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
pub   1024D/A24D5076 2009-06-24
      Key fingerprint = 2F2E 85D8 A945 41C2 B7D1  667A 8616 2CE5 A24D 5076
uid                  Dominic Mitchell <dom@happygiraffe.net>
sub   2048g/4C2D8074 2009-06-24

As an aside, I am using gnupg 1, as I had some issues with the maven-gpg-plugin and gnupg 2. And it was simpler to just install gnupg 1 than to fix them. 🙂

This creates a bunch of files in ~/.gnupg:

$ ls -l ~/.gnupg
total 64
-rw-------  1 dom  dom  9154 21 Jun 20:39 gpg.conf
-rw-------  1 dom  dom  1171 24 Jun 20:44 pubring.gpg
-rw-------  1 dom  dom  1171 24 Jun 20:44 pubring.gpg~
-rw-------  1 dom  dom   600 24 Jun 20:44 random_seed
-rw-------  1 dom  dom  1320 24 Jun 20:44 secring.gpg
-rw-------  1 dom  dom  1280 24 Jun 20:44 trustdb.gpg
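
You can check that the keypair really made it into those files with --list-keys and --list-secret-keys. For the key generated above, that looks something like this:

$ gpg --list-keys
/Users/dom/.gnupg/pubring.gpg
-----------------------------
pub   1024D/A24D5076 2009-06-24
uid                  Dominic Mitchell <dom@happygiraffe.net>
sub   2048g/4C2D8074 2009-06-24

$ gpg --list-secret-keys
/Users/dom/.gnupg/secring.gpg
-----------------------------
sec   1024D/A24D5076 2009-06-24
uid                  Dominic Mitchell <dom@happygiraffe.net>
ssb   2048g/4C2D8074 2009-06-24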

Next, the key needs to be published to one of the keyservers. The default configuration comes set up with the keyserver keys.gnupg.net. You can send your key up there easily:

$ gpg --send-keys A24D5076
gpg: sending key A24D5076 to hkp server keys.gnupg.net

And now it’s published.
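
To check that it arrived, you should be able to pull it straight back down again (the keyservers can take a little while to sync, so don't panic if it's not visible immediately):

$ gpg --keyserver keys.gnupg.net --recv-keys A24D5076
gpg: requesting key A24D5076 from hkp server keys.gnupg.net
gpg: key A24D5076: "Dominic Mitchell <dom@happygiraffe.net>" not changed
gpg: Total number processed: 1
gpg:              unchanged: 1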

Integrating this with your maven build is fairly simple. The example configuration works exactly as expected. I did one thing slightly differently: I created a gpg profile, and then referenced that from the release plugin. That means I’ll only sign releases, not all builds. Which seems reasonable enough to me.

  <build>
    <pluginManagement>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-release-plugin</artifactId>
          <version>2.0-beta-9</version>
          <configuration>
            <releaseProfiles>gpg</releaseProfiles>
          </configuration>
        </plugin>
      </plugins>
    </pluginManagement>
  </build>
  <profiles>
    <profile>
      <id>gpg</id>
      <build>
        <plugins>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-gpg-plugin</artifactId>
            …
          </plugin>
        </plugins>
      </build>
    </profile>
  </profiles>
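
With a setup along those lines, day-to-day builds stay unsigned and the gpg profile only gets activated when I actually cut a release (or ask for it explicitly). Roughly speaking:

$ mvn verify -Pgpg       # sign a build by hand, to check the gpg plugin is hooked up
$ mvn release:prepare    # tag the release as usual
$ mvn release:perform    # the release plugin switches on the gpg profile for the real build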


The Maven Ecosystem

Last night I went to see Jason van Zyl of sonatype talking about various bits of the maven ecosystem, and where they’re going. The main bit for me was what’s coming up in maven 3.0. There was a great deal of talk about OSGi-related issues, but it reinforced my belief that whilst there’s some good technology in there, it’s still quite complicated to use and manage. Steps are being taken to address this (better tooling support), but they’re not there yet. Also, for the kind of things I do (simple, content-driven, somewhat static webapps), it doesn’t seem to be necessary anyway.

So what’s coming up in maven 3.0? Fundamentally, there won’t be that many new user-visible features (wait for 3.1!). Internally, there have been huge refactorings by the sound of things (along with integration tests to ensure no user-visible regressions). They’re switching away from plexus and towards guice + peaberry. But that’s internal detail. And in theory, it shouldn’t matter even if you’re a plugin author.

What sounded really nice was the focus on making life much easier for users of the embedded maven. Primarily, this means IDE authors. Things like plugin & lifecycle extension points and incremental build support should allow m2eclipse to be much, much more intelligent about the work it does. Jason mentioned that a version of m2eclipse which builds on the trunk of maven 3.0 can now build the trunk of maven in seconds rather than minutes. Why? Because it’s not duplicating work that’s already been done by Eclipse.

The main change is to the artifact resolution system. It’s been one of the main sources of bugs in maven 2.0. It’s been completely junked in 3.0 and replaced with mercury, which handles both transport and resolution of artifacts. It should be better tested, and things like version ranges should work much more like they do in OSGi.

One (minor) change is that the error messages should be much better. That’s a welcome relief.

There are other tidbits that I think are scheduled for 3.1 that should be really nice:

  • everybody’s favourite: versionless parent elements
  • attributes in the POM — hooray, that should make POMs vastly smaller.
  • mixin POMs — should allow much more flexibility in constructing dependencies on both groups of artifacts and groups of plugins.

There were further talks about hudson & nexus, but I’m fairly familiar with these, so there wasn’t much that was news to me.

My thanks go to Peter Pilgrim for organising, and to EMC/Conchango for hosting.


find(1) abuse

A colleague wanted to find all the files that end in *.disp or *.frag. Easy enough, right? In the shell you can say *.{disp,frag}.

$ ls *.{disp,frag}
foo.disp
foo.frag

Except that this doesn’t work with find:

$ find . -name '*.{disp,frag}'

Why not? Because braces aren’t globs. Brace expansion is something the shell does before a command even runs, and find(1)’s -name only understands glob patterns, so the quoted braces get passed through literally and match nothing.
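
You can see that it’s the shell doing the work by sticking an echo in front: unquoted, the braces get expanded before the command ever runs; quoted, they’re passed through untouched.

$ echo foo.{disp,frag}
foo.disp foo.frag
$ echo 'foo.{disp,frag}'
foo.{disp,frag}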

What can you do instead?

Firstly, you can use the -regex flag.

$ find . -regex '.*\.\(disp\|frag\)'
./foo.disp
./foo.frag

This is particularly awful because find defaults to the emacs style of regexes (whose details took me ages to remember), which means you end up with leaning toothpick syndrome. And you’re matching against the whole path, not just the filename, so you have to put the .* on the front.

The second option is to use find’s expression language. Find THIS FILE or THAT FILE.

$ find . \( -name '*.disp' -or -name '*.frag' \) -print
./foo.disp
./foo.frag

This is a bit more readable, but you have to remember to escape the parentheses, because the shell likes to munch on them. Overall, it does seem preferable to -regex though.
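
If you happen to be on a BSD find (like the one that ships with OS X), there’s a third option: the -E flag switches -regex over to extended regexes, which gets rid of most of the toothpicks. GNU find can do the same with -regextype posix-extended.

$ find -E . -regex '.*\.(disp|frag)'
./foo.disp
./foo.frag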

Like all these things, you can take it too far.


Responsiveness

Yesterday, whilst trying out Eclipse 3.5, I noticed a problem with the new XSLT support. So, I filed bug 279793 (All XSLT 2.0 has validation errors in xslt-2.0.xsd).

I got a response in under 30 minutes, including a workaround, and the fix was integrated into the next release. That is completely awesome!

I’d like to say a huge thanks to Dave Carver for his work, not just on fixing this bug, but also on getting XSLT integration into Eclipse in the first place. That was something I’d been missing for a while.


Eclipse 3.5 in Cocoa

I’m just trying out Eclipse 3.5-RC4. One of the big new features for me is that it’s now based on Cocoa instead of Carbon. There are many benefits to this, including being able to run on 64-bit Java 6. Fundamentally, it just looks and feels a little bit more mac-like.

As an example, one nice little mac feature I use a bit is “lookup this word in the dictionary”. If you hit Ctrl-Cmd-D and hover over a word, you get an in-place definition. Eclipse now does this:

Looking up a word in the dictionary inside Eclipse

It doesn’t work everywhere (which you can probably guess from the context of that screenshot), but it is an indicator that Eclipse & mac are coming together. This is great news for Java developers on the mac.

Oh, it does feel a little bit faster too, which can’t hurt.


Google Analytics in XHTML

I’ve been attempting to get Google Analytics to work correctly in both Firefox and IE6 for a site at $WORK. This is not normally a problem, apart from the fact that we’re serving up pages to Firefox as application/xhtml+xml in order to get MathML support.

Now, the sample code from Google is pretty gnarly.

<script type="text/javascript">
var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));
</script>
<script type="text/javascript">
try{
var pageTracker = _gat._getTracker("UA-xxxxxx-x");
pageTracker._trackPageview();
} catch(err) {}
</script>

This fails in XHTML as document.write() isn’t there.

I tried a number of ways to get this to work.

  • Replace document.write() with some jQuery code to insert a script tag.
    • This didn’t work in IE6 — the second script block ended up getting called before the newly inserted script tag had loaded.
    • But I did find out that jQuery will replace script tags with Ajax calls for you. Which means you don’t end up with a script tag in the DOM tree, which is highly confusing when you’re looking for it in firebug.
  • Replace document.write() with native DOM calls to insert a script tag.
    • I did find the neat idea of adding an id to the script tag you’re currently in, so you know where to insert new DOM elements.
    • But it still failed, and for the same reason as above.

I was just about to start implementing something evil involving setInterval(), when I realised…

… this site will never use SSL!

So I replaced the code that generates a script tag with the script tag itself.

<script type="text/javascript" src="http://www.google-analytics.com/ga.js"></script>
<script type="text/javascript">
try{
var pageTracker = _gat._getTracker("UA-xxxxxx-x");
pageTracker._trackPageview();
} catch(err) {}
</script>

Tada! If only I’d thought of this a few hours earlier… The moral is to be more aware of the context in which you’re doing something. Keep an eye on the "big picture", to use a particularly horrible metaphor.


sandbox(7)

Like a lot of people, most of my Unix knowledge comes from an early reading of Advanced Programming in the UNIX Environment. This is an excellent tome on the interfaces provided by the kernel to programs on a Unix system.

Unfortunately, it’s over 15 years old now, and things have moved on. Naturally, I haven’t quite kept up. So I’ve just been pleasantly surprised to see that OS X has grown a sandbox system. There is scant documentation available, beyond the sandbox(7) and sandbox-exec(1) man pages.

Also, if you poke around, you’ll find /usr/include/sandbox.h and /usr/share/sandbox. The latter is interesting — it contains lisp-like definitions of access control lists for various processes.

What’s interesting to me is sandbox-exec though. This can be used with one of the builtin profiles to easily restrict access. For example:

$ sandbox-exec -n nowrite touch /tmp/foo
touch: /tmp/foo: Operation not permitted

After using strings(1) on apple’s libc (/usr/lib/libSystem.dylib), I managed to get these builtin profile names out:

  • nointernet: TCP/IP networking is prohibited
  • nonet: All sockets-based networking is prohibited.
  • pure-computation: All operating system services are prohibited.
  • nowrite: File system writes are prohibited.
  • write-tmp-only: File system writes are restricted to the temporary folder /var/tmp and the folder specified by the confstr(3) configuration variable _CS_DARWIN_USER_TEMP_DIR.

They’re only documented as internal constants for C programs, but it’s quite handy to have them available for sandbox-exec. It would be nice to know in more detail what they actually do, though.
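
Going by that description, I’d expect write-tmp-only to allow writes under /var/tmp but not under /tmp itself (which is really /private/tmp, so not on the allowed list). Something like:

$ sandbox-exec -n write-tmp-only touch /tmp/foo
touch: /tmp/foo: Operation not permitted
$ sandbox-exec -n write-tmp-only touch /var/tmp/foo
$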

Of course, this still isn’t really getting down to how the sandbox is implemented. Is it done inside the kernel or on the userland side? I don’t really know. And I don’t yet have enough dtrace-fu to figure it out.

Anyway, this seems like a fun toy. And of course, it’s reminded me that I need to try out chromium on the mac… Drat, no PPC support. 😦


dependency complexity

I love the google-collections library. It’s got some really nice features. But, it’s not stable yet. They’ve explicitly stated that until they hit 1.0 it’s not going to be a stable API. So there are changes each release. Nothing major, but changes.

As an example, in the jump from 0.9 to 1.0rc1, the static methods on the Join class became the fluent API on the Joiner class.

(as an aside, could we have some tags, please?)

Following this change is simple.

@@ -310,7 +310,7 @@
         } catch (KeyStoreException e) {
             throw new RuntimeException(e);
         }
-        return Join.join(" | ", principals);
+        return Joiner.on(" | ").join(principals);
     }

     /**

But the knock-on effect comes when you start getting lots of things which have google-collections dependencies. At $WORK, I’ve got a project whose dependencies look like this.

DC2’s dependency graph, before.

I wanted to extract a part of DC2 into its own library, commslib. This was pretty easy as the code was self-contained. Naturally, I wanted it to use the latest version of everything, so I upgraded google-collections to 1.0rc1. Again, fairly simple.

This is what I ended up with.

DC2’s dependency graph, after extracting commslib.

Except that now there’s a problem.

  • commslib uses Joiner, so it’ll blow up unless I upgrade DC2’s google-collections to 1.0rc1.
  • GSK uses Join, so it’ll blow up if I upgrade DC2’s google-collections to 1.0rc1.

And thus have I painted myself into a corner. 🙂

As it happens, DC2 had a dependencyManagement section forcing everything to use google-collections 0.8. → Instant BOOM.

The solution is to upgrade all my dependencies to use google-collections 1.0rc1. But this turns out to be a much larger change than I had originally envisaged, as now I have to create releases for two dependent projects. This isn’t too much of a hassle in this case (yay for the maven-release-plugin), but it could be a large undertaking if either of those projects is not presently in a releasable state.
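
It’s also worth checking exactly where each version is coming from before you start. dependency:tree with -Dverbose will show every path by which google-collections arrives, including the versions that lost out to the dependencyManagement section. Something like:

$ mvn dependency:tree -Dverbose -Dincludes=com.google.collections:google-collections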

I’m not trying to pick on google-collections (I still love it). I’m just marvelling at how quickly complexity can blossom from something so simple.