BloodSpell Development Updates

TOGLfaceS
We've just realized we haven't really talked about our lip-synching tool, TOGLfaceS, even though we've been using it for a while. Whoops. Time to correct that little oversight.

Some people would be dissuaded from making Machinima in Neverwinter Nights (NWN) because it doesn't have any kind of built-in lip-synching. Not us. We got in touch with the very clever Anthony Bailey, and asked him to give us some lip-synching, please.

He took advantage of the fact that NWN uses OpenGL, a standard application programming interface, to display its output on your screen. It was possible to intercept the OpenGL commands coming from the game and change them before sending them on their way - replacing one texture on a model with another, for example. Do that with a bunch of textures and you've got a kind of hi-tech flickbook you can apply to models.
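
To make the idea concrete, here's a minimal sketch of the technique in C++ - our own illustration, not TOGLfaceS's actual source. An interception layer exports the same functions as the system OpenGL library, forwards most calls untouched, and rewrites the ones it cares about; the replacement map and helper names below are invented.

    // Minimal sketch of GL call interception for texture swapping - invented
    // for illustration, not TOGLfaceS's actual source. The hook DLL exports
    // the same symbols as opengl32.dll; the game calls us, we call the driver.
    #include <windows.h>
    #include <map>

    typedef unsigned int GLenum;   // local GL typedefs; gl.h isn't needed here
    typedef unsigned int GLuint;

    // The driver's real glBindTexture, resolved when our DLL loads.
    static void (APIENTRY *real_glBindTexture)(GLenum, GLuint) = 0;

    // Which texture to show in place of which (e.g. a talking face
    // substituted for the resting face the game is about to bind).
    static std::map<GLuint, GLuint> g_replacements;

    extern "C" void APIENTRY glBindTexture(GLenum target, GLuint texture)
    {
        std::map<GLuint, GLuint>::const_iterator it = g_replacements.find(texture);
        if (it != g_replacements.end())
            texture = it->second;      // swap the skin, leave the geometry alone
        real_glBindTexture(target, texture);
    }

    // Called once from DllMain: fetch the real entry point from the system DLL.
    static void ResolveRealGL()
    {
        HMODULE gl = LoadLibraryA("C:\\Windows\\System32\\opengl32.dll");
        real_glBindTexture =
            (void (APIENTRY *)(GLenum, GLuint))GetProcAddress(gl, "glBindTexture");
    }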

Anthony quickly got back to us with TOGLfaceS - "Take Over GL face Skins". TOGLfaceS uses a text file to bind specific keys to in-game characters and expressions - using those keys will then let you puppet character expressions in game, swapping textures to make characters look happy, or sad, or change their mouth shapes and give them some lip-synching. We had our lip-synching tool. All we needed were lips to synch.
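
The binding file's actual syntax is TOGLfaceS's own and isn't documented here, but the runtime half of the puppeteering might look roughly like the sketch below, continuing the interception sketch above. The keys, variable names, and the SetReplacement helper are all invented for illustration.

    // Hypothetical per-frame key poll driving the texture swaps. The real
    // key bindings come from TOGLfaceS's text file; everything here is invented.
    #include <windows.h>

    typedef unsigned int GLuint;

    extern GLuint g_faceOriginal;   // the face texture NWN binds by default
    extern GLuint g_faceRest, g_faceTalkHalf, g_faceTalkOpen, g_faceBlink;
    void SetReplacement(GLuint from, GLuint to);  // updates the swap map above

    void PollPuppetKeys()
    {
        // GetAsyncKeyState lets a hook DLL poll keys without owning the window.
        if      (GetAsyncKeyState('1') & 0x8000) SetReplacement(g_faceOriginal, g_faceRest);
        else if (GetAsyncKeyState('2') & 0x8000) SetReplacement(g_faceOriginal, g_faceTalkHalf);
        else if (GetAsyncKeyState('3') & 0x8000) SetReplacement(g_faceOriginal, g_faceTalkOpen);
        else if (GetAsyncKeyState('4') & 0x8000) SetReplacement(g_faceOriginal, g_faceBlink);
    }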

We had to create the textures we wanted to swap in to give our characters facial expressions and mouth movement. Between Paint Shop Pro, EditPad, a PLT converter, and NWN Explorer, it was easy to create new textures for our characters, and models to view the textures on - but it wasn't so easy to get them looking acceptable. We're still refining some of our characters' faces now, and we're halfway through filming.



Our main characters needed 6 faces for each emotion (see the sketch after this list):
- resting face with the eyes open and the mouth closed
- talking face with the mouth slightly open
- talking face with the mouth open
- blinking face with the eyes and mouth closed
- looking left
- looking right
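
In code terms, each emotion is just a set of six interchangeable skins. A naming sketch of our own (illustrative only, not BloodSpell's actual asset layout):

    // The six per-emotion face states listed above, as a naming sketch
    // (illustrative only - not BloodSpell's actual asset layout).
    enum FaceState {
        FACE_REST,        // eyes open, mouth closed
        FACE_TALK_HALF,   // mouth slightly open
        FACE_TALK_OPEN,   // mouth open
        FACE_BLINK,       // eyes and mouth closed
        FACE_LOOK_LEFT,
        FACE_LOOK_RIGHT,
        FACE_STATE_COUNT  // = 6 textures per emotion, per character
    };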



Any bit-part characters who speak also needed a number of faces - usually just the top 4 on our list. With each character having a number of expressions (normal, angry, frightened, happy, sad), that's a lot of faces: 5 expressions times 6 faces is 30 textures for a single main character. In fact, our character heads module (more on this in a minute) contains over 200 different faces for characters.



So. We'd made a couple of hundred heads for our characters. What now? Well, we needed to get those heads into the game. We put our new models and textures into a HAK (one of NWN's asset files) and made a game module, imaginatively called 'Character Heads'. This module is made up of half a dozen maps, filled with the disembodied heads of our characters - each with a different facial expression.

That sounds a little strange, but there's a perfectly reasonable explanation. TOGLfaceS can only apply textures that have already been loaded into the game, so if we want a character to talk, we need to have loaded all of his talking-head textures. Creating one module filled with heads, and loading it before changing modules and starting to shoot, is more efficient than putting every character head into every module. Efficiency also explains the decapitations - we don't swap any other textures, and over 200 extra bodies would mean longer load times.
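
Seen from the interception layer, the heads module is essentially a preloading pass: the layer can only redirect binds to textures the game itself has uploaded. A hypothetical hook on the upload call shows the idea; the IdentifyFace helper and bookkeeping names are invented.

    // Sketch of why the heads module is loaded first: we can only swap to
    // textures the game has already uploaded, so we hook the upload call
    // and remember each face's GL texture ID. IdentifyFace is invented.
    #include <windows.h>
    #include <map>
    #include <string>

    typedef unsigned int GLenum;
    typedef int          GLint;
    typedef int          GLsizei;

    static void (APIENTRY *real_glTexImage2D)(GLenum, GLint, GLint, GLsizei,
        GLsizei, GLint, GLenum, GLenum, const void*) = 0;

    static unsigned int g_boundTexture;                        // last-bound ID
    static std::map<std::string, unsigned int> g_loadedFaces;  // name -> GL ID

    // Invented helper: recognise a face texture by its size and pixel data.
    std::string IdentifyFace(GLsizei w, GLsizei h, const void* pixels);

    extern "C" void APIENTRY glTexImage2D(GLenum target, GLint level,
        GLint internalformat, GLsizei width, GLsizei height, GLint border,
        GLenum format, GLenum type, const void* pixels)
    {
        std::string name = IdentifyFace(width, height, pixels);
        if (!name.empty())
            g_loadedFaces[name] = g_boundTexture;  // now available for swaps
        real_glTexImage2D(target, level, internalformat, width, height,
                          border, format, type, pixels);
    }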

We'll be releasing TOGLfaceS when we have the time in our production schedule - if you're desperate to make NWN machinima, you may as well start making your faces now - you might be done by the time we release it.
  • (Very good explanation, f33b - thanks.)

    First, the novelty disclaimer. It is worth noting that the idea of a GL interception layer is not a new one - indeed, TOGLfaceS was built on top of a fine existing GLIntercept codebase, without which life would have been considerably harder. Further, it was Hugh's idea to use this approach to add emotes and mouths to BloodSpell. I just supplied some of the lower-level conceptual details and the code for this little project.

    Second, a few notes on scope. The current implementation is somewhat hard-wired to the way the Neverwinter Nights engine does certain things (and to the way Strange Company capture their footage.) In principle the tool could be generalized to work with any GL-based game rendering engine that uses 2D textures for the detail of faces. (However, I fear the work to intercept the geometry of 3D-modelled faces - which are surely the way of the future - would be impractical.) A corresponding Direct3D interception layer is also quite conceivable for rendering engines based on that technology (and at least for games, this is possibly more common.)

    Anyway, I hope some other otherwise expressionless engines get unlocked for character-based machinima using these techniques.
  • (Anonymous)
    Could you also employ distortion masks in addition to your more traditional animation approach? CrazyTalk is an example of using a distortion field, modeled on a 3D face, laid over the top of a 2D image. Also, given the many close-ups in dialogue, you might do a lot of lip-synching in CrazyTalk and save yourself a whack of time. For instance, you could stop having to produce hundreds of expression images and do your lip-synchs in a tiny fraction of the time. Worth checking out - just a thought.
    • CrazyTalk

      (Anonymous)
      Playing around with the CrazyTalk idea - here is an animation of Jared reciting Lord Byron: http://www.archive.org/download/jaredbyron/JaredCrush.wmv

      Was easy and quick to do. :)